May 14 23:49:06.351820 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 14 23:49:06.351845 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 14 22:22:56 -00 2025 May 14 23:49:06.351854 kernel: KASLR enabled May 14 23:49:06.351860 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') May 14 23:49:06.351868 kernel: printk: bootconsole [pl11] enabled May 14 23:49:06.351874 kernel: efi: EFI v2.7 by EDK II May 14 23:49:06.351881 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20f698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 May 14 23:49:06.351887 kernel: random: crng init done May 14 23:49:06.351893 kernel: secureboot: Secure boot disabled May 14 23:49:06.351900 kernel: ACPI: Early table checksum verification disabled May 14 23:49:06.351905 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) May 14 23:49:06.351912 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 14 23:49:06.351918 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) May 14 23:49:06.351926 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) May 14 23:49:06.351933 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) May 14 23:49:06.353099 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 14 23:49:06.353126 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 14 23:49:06.353141 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 14 23:49:06.353148 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) May 14 23:49:06.353154 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 
VRTUAL MICROSFT 00000001 MSFT 00000001) May 14 23:49:06.353161 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) May 14 23:49:06.353167 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 14 23:49:06.353174 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 May 14 23:49:06.353180 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] May 14 23:49:06.353186 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] May 14 23:49:06.353192 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] May 14 23:49:06.353199 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] May 14 23:49:06.353205 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] May 14 23:49:06.353214 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] May 14 23:49:06.353220 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] May 14 23:49:06.353226 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] May 14 23:49:06.353233 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] May 14 23:49:06.353239 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] May 14 23:49:06.353245 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] May 14 23:49:06.353252 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] May 14 23:49:06.353258 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] May 14 23:49:06.353264 kernel: Zone ranges: May 14 23:49:06.353270 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] May 14 23:49:06.353277 kernel: DMA32 empty May 14 23:49:06.353283 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] May 14 23:49:06.353294 kernel: Movable zone start for each node May 14 23:49:06.353307 kernel: Early memory node ranges May 14 23:49:06.353313 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] May 14 23:49:06.353320 kernel: node 0: [mem 
0x0000000000824000-0x000000003e45ffff] May 14 23:49:06.353327 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] May 14 23:49:06.353335 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] May 14 23:49:06.353341 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] May 14 23:49:06.353348 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] May 14 23:49:06.353355 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] May 14 23:49:06.353361 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] May 14 23:49:06.353368 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] May 14 23:49:06.353375 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] May 14 23:49:06.353382 kernel: On node 0, zone DMA: 36 pages in unavailable ranges May 14 23:49:06.353388 kernel: psci: probing for conduit method from ACPI. May 14 23:49:06.353395 kernel: psci: PSCIv1.1 detected in firmware. May 14 23:49:06.353401 kernel: psci: Using standard PSCI v0.2 function IDs May 14 23:49:06.353408 kernel: psci: MIGRATE_INFO_TYPE not supported. 
May 14 23:49:06.353417 kernel: psci: SMC Calling Convention v1.4 May 14 23:49:06.353424 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 May 14 23:49:06.353430 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 May 14 23:49:06.353437 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 14 23:49:06.353444 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 14 23:49:06.353452 kernel: pcpu-alloc: [0] 0 [0] 1 May 14 23:49:06.353459 kernel: Detected PIPT I-cache on CPU0 May 14 23:49:06.353466 kernel: CPU features: detected: GIC system register CPU interface May 14 23:49:06.353473 kernel: CPU features: detected: Hardware dirty bit management May 14 23:49:06.353480 kernel: CPU features: detected: Spectre-BHB May 14 23:49:06.353487 kernel: CPU features: kernel page table isolation forced ON by KASLR May 14 23:49:06.353496 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 14 23:49:06.353503 kernel: CPU features: detected: ARM erratum 1418040 May 14 23:49:06.353510 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) May 14 23:49:06.353516 kernel: CPU features: detected: SSBS not fully self-synchronizing May 14 23:49:06.353523 kernel: alternatives: applying boot alternatives May 14 23:49:06.353532 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9 May 14 23:49:06.353539 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 14 23:49:06.353546 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 14 23:49:06.353553 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 23:49:06.353560 kernel: Fallback order for Node 0: 0 May 14 23:49:06.353567 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 May 14 23:49:06.353575 kernel: Policy zone: Normal May 14 23:49:06.353582 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 23:49:06.353588 kernel: software IO TLB: area num 2. May 14 23:49:06.353595 kernel: software IO TLB: mapped [mem 0x0000000036540000-0x000000003a540000] (64MB) May 14 23:49:06.353602 kernel: Memory: 3983592K/4194160K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 210568K reserved, 0K cma-reserved) May 14 23:49:06.353609 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 14 23:49:06.353615 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 23:49:06.353623 kernel: rcu: RCU event tracing is enabled. May 14 23:49:06.353630 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 14 23:49:06.353637 kernel: Trampoline variant of Tasks RCU enabled. May 14 23:49:06.353644 kernel: Tracing variant of Tasks RCU enabled. May 14 23:49:06.353652 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 14 23:49:06.353659 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 14 23:49:06.353666 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 14 23:49:06.353672 kernel: GICv3: 960 SPIs implemented May 14 23:49:06.353679 kernel: GICv3: 0 Extended SPIs implemented May 14 23:49:06.353685 kernel: Root IRQ handler: gic_handle_irq May 14 23:49:06.353692 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 14 23:49:06.353699 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 May 14 23:49:06.353706 kernel: ITS: No ITS available, not enabling LPIs May 14 23:49:06.353713 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 14 23:49:06.353720 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 23:49:06.353726 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 14 23:49:06.353736 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 14 23:49:06.353743 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 14 23:49:06.353750 kernel: Console: colour dummy device 80x25 May 14 23:49:06.353757 kernel: printk: console [tty1] enabled May 14 23:49:06.353764 kernel: ACPI: Core revision 20230628 May 14 23:49:06.353771 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 14 23:49:06.353779 kernel: pid_max: default: 32768 minimum: 301 May 14 23:49:06.353785 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 14 23:49:06.353793 kernel: landlock: Up and running. May 14 23:49:06.353802 kernel: SELinux: Initializing. 
May 14 23:49:06.353809 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 23:49:06.353816 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 23:49:06.353823 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 23:49:06.353830 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 23:49:06.353838 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 May 14 23:49:06.353845 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 May 14 23:49:06.353861 kernel: Hyper-V: enabling crash_kexec_post_notifiers May 14 23:49:06.353868 kernel: rcu: Hierarchical SRCU implementation. May 14 23:49:06.353875 kernel: rcu: Max phase no-delay instances is 400. May 14 23:49:06.353883 kernel: Remapping and enabling EFI services. May 14 23:49:06.353890 kernel: smp: Bringing up secondary CPUs ... May 14 23:49:06.353899 kernel: Detected PIPT I-cache on CPU1 May 14 23:49:06.353906 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 May 14 23:49:06.353914 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 23:49:06.353921 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 14 23:49:06.353928 kernel: smp: Brought up 1 node, 2 CPUs May 14 23:49:06.353937 kernel: SMP: Total of 2 processors activated. 
May 14 23:49:06.353960 kernel: CPU features: detected: 32-bit EL0 Support May 14 23:49:06.353968 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence May 14 23:49:06.353975 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 14 23:49:06.353982 kernel: CPU features: detected: CRC32 instructions May 14 23:49:06.353990 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 14 23:49:06.353997 kernel: CPU features: detected: LSE atomic instructions May 14 23:49:06.354004 kernel: CPU features: detected: Privileged Access Never May 14 23:49:06.354012 kernel: CPU: All CPU(s) started at EL1 May 14 23:49:06.354023 kernel: alternatives: applying system-wide alternatives May 14 23:49:06.354030 kernel: devtmpfs: initialized May 14 23:49:06.354037 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 23:49:06.354045 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 14 23:49:06.354053 kernel: pinctrl core: initialized pinctrl subsystem May 14 23:49:06.354061 kernel: SMBIOS 3.1.0 present. 
May 14 23:49:06.354068 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 May 14 23:49:06.354076 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 14 23:49:06.354095 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 14 23:49:06.354105 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 14 23:49:06.354113 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 14 23:49:06.354121 kernel: audit: initializing netlink subsys (disabled) May 14 23:49:06.354128 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 May 14 23:49:06.354136 kernel: thermal_sys: Registered thermal governor 'step_wise' May 14 23:49:06.354144 kernel: cpuidle: using governor menu May 14 23:49:06.354151 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 14 23:49:06.354158 kernel: ASID allocator initialised with 32768 entries May 14 23:49:06.354166 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 14 23:49:06.354174 kernel: Serial: AMBA PL011 UART driver May 14 23:49:06.354183 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 14 23:49:06.354190 kernel: Modules: 0 pages in range for non-PLT usage May 14 23:49:06.354198 kernel: Modules: 509264 pages in range for PLT usage May 14 23:49:06.354205 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 14 23:49:06.354213 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 14 23:49:06.354220 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 14 23:49:06.354227 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 14 23:49:06.354234 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 14 23:49:06.354244 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 14 23:49:06.354251 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages May 14 23:49:06.354259 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 14 23:49:06.354266 kernel: ACPI: Added _OSI(Module Device) May 14 23:49:06.354273 kernel: ACPI: Added _OSI(Processor Device) May 14 23:49:06.354281 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 14 23:49:06.354288 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 14 23:49:06.354296 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 14 23:49:06.354303 kernel: ACPI: Interpreter enabled May 14 23:49:06.354313 kernel: ACPI: Using GIC for interrupt routing May 14 23:49:06.354320 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA May 14 23:49:06.354328 kernel: printk: console [ttyAMA0] enabled May 14 23:49:06.354335 kernel: printk: bootconsole [pl11] disabled May 14 23:49:06.354342 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA May 14 23:49:06.354350 kernel: iommu: Default domain type: Translated May 14 23:49:06.354357 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 14 23:49:06.354365 kernel: efivars: Registered efivars operations May 14 23:49:06.354372 kernel: vgaarb: loaded May 14 23:49:06.354383 kernel: clocksource: Switched to clocksource arch_sys_counter May 14 23:49:06.354390 kernel: VFS: Disk quotas dquot_6.6.0 May 14 23:49:06.354398 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 14 23:49:06.354406 kernel: pnp: PnP ACPI init May 14 23:49:06.354413 kernel: pnp: PnP ACPI: found 0 devices May 14 23:49:06.354421 kernel: NET: Registered PF_INET protocol family May 14 23:49:06.354428 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 14 23:49:06.354436 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 14 23:49:06.354443 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 14 
23:49:06.354453 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 14 23:49:06.354460 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 14 23:49:06.354468 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 14 23:49:06.354475 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 23:49:06.354483 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 23:49:06.354490 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 14 23:49:06.354497 kernel: PCI: CLS 0 bytes, default 64 May 14 23:49:06.354504 kernel: kvm [1]: HYP mode not available May 14 23:49:06.354512 kernel: Initialise system trusted keyrings May 14 23:49:06.354522 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 14 23:49:06.354529 kernel: Key type asymmetric registered May 14 23:49:06.354537 kernel: Asymmetric key parser 'x509' registered May 14 23:49:06.354544 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 14 23:49:06.354551 kernel: io scheduler mq-deadline registered May 14 23:49:06.354558 kernel: io scheduler kyber registered May 14 23:49:06.354565 kernel: io scheduler bfq registered May 14 23:49:06.354573 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 23:49:06.354580 kernel: thunder_xcv, ver 1.0 May 14 23:49:06.354590 kernel: thunder_bgx, ver 1.0 May 14 23:49:06.354597 kernel: nicpf, ver 1.0 May 14 23:49:06.354604 kernel: nicvf, ver 1.0 May 14 23:49:06.354815 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 14 23:49:06.354889 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T23:49:05 UTC (1747266545) May 14 23:49:06.354899 kernel: efifb: probing for efifb May 14 23:49:06.354907 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k May 14 23:49:06.354914 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 May 14 23:49:06.354924 kernel: efifb: scrolling: 
redraw May 14 23:49:06.354931 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 14 23:49:06.360775 kernel: Console: switching to colour frame buffer device 128x48 May 14 23:49:06.360819 kernel: fb0: EFI VGA frame buffer device May 14 23:49:06.360827 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... May 14 23:49:06.360835 kernel: hid: raw HID events driver (C) Jiri Kosina May 14 23:49:06.360842 kernel: No ACPI PMU IRQ for CPU0 May 14 23:49:06.360850 kernel: No ACPI PMU IRQ for CPU1 May 14 23:49:06.360858 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available May 14 23:49:06.360874 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 14 23:49:06.360881 kernel: watchdog: Hard watchdog permanently disabled May 14 23:49:06.360888 kernel: NET: Registered PF_INET6 protocol family May 14 23:49:06.360895 kernel: Segment Routing with IPv6 May 14 23:49:06.360903 kernel: In-situ OAM (IOAM) with IPv6 May 14 23:49:06.360911 kernel: NET: Registered PF_PACKET protocol family May 14 23:49:06.360918 kernel: Key type dns_resolver registered May 14 23:49:06.360925 kernel: registered taskstats version 1 May 14 23:49:06.360932 kernel: Loading compiled-in X.509 certificates May 14 23:49:06.360951 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: cdb7ce3984a1665183e8a6ab3419833bc5e4e7f4' May 14 23:49:06.360959 kernel: Key type .fscrypt registered May 14 23:49:06.360966 kernel: Key type fscrypt-provisioning registered May 14 23:49:06.360973 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 14 23:49:06.360980 kernel: ima: Allocated hash algorithm: sha1 May 14 23:49:06.360987 kernel: ima: No architecture policies found May 14 23:49:06.360995 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 14 23:49:06.361009 kernel: clk: Disabling unused clocks May 14 23:49:06.361017 kernel: Freeing unused kernel memory: 38336K May 14 23:49:06.361027 kernel: Run /init as init process May 14 23:49:06.361035 kernel: with arguments: May 14 23:49:06.361042 kernel: /init May 14 23:49:06.361049 kernel: with environment: May 14 23:49:06.361056 kernel: HOME=/ May 14 23:49:06.361063 kernel: TERM=linux May 14 23:49:06.361070 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 23:49:06.361079 systemd[1]: Successfully made /usr/ read-only. May 14 23:49:06.361092 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 23:49:06.361100 systemd[1]: Detected virtualization microsoft. May 14 23:49:06.361108 systemd[1]: Detected architecture arm64. May 14 23:49:06.361115 systemd[1]: Running in initrd. May 14 23:49:06.361123 systemd[1]: No hostname configured, using default hostname. May 14 23:49:06.361131 systemd[1]: Hostname set to . May 14 23:49:06.361139 systemd[1]: Initializing machine ID from random generator. May 14 23:49:06.361146 systemd[1]: Queued start job for default target initrd.target. May 14 23:49:06.361156 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 23:49:06.361163 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 14 23:49:06.361172 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 14 23:49:06.361180 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 23:49:06.361188 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 14 23:49:06.361197 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 14 23:49:06.361206 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 14 23:49:06.361216 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 14 23:49:06.361224 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 23:49:06.361231 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 23:49:06.361239 systemd[1]: Reached target paths.target - Path Units. May 14 23:49:06.361247 systemd[1]: Reached target slices.target - Slice Units. May 14 23:49:06.361260 systemd[1]: Reached target swap.target - Swaps. May 14 23:49:06.361268 systemd[1]: Reached target timers.target - Timer Units. May 14 23:49:06.361275 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 14 23:49:06.361285 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 23:49:06.361293 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 14 23:49:06.361301 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 14 23:49:06.361309 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 23:49:06.361317 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 23:49:06.361325 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 14 23:49:06.361333 systemd[1]: Reached target sockets.target - Socket Units. May 14 23:49:06.361340 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 23:49:06.361348 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 23:49:06.361357 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 23:49:06.361365 systemd[1]: Starting systemd-fsck-usr.service... May 14 23:49:06.361373 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 23:49:06.361381 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 23:49:06.361432 systemd-journald[218]: Collecting audit messages is disabled. May 14 23:49:06.361454 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:49:06.361463 systemd-journald[218]: Journal started May 14 23:49:06.361482 systemd-journald[218]: Runtime Journal (/run/log/journal/4fd38b28f1694b3abd37639d24f840a2) is 8M, max 78.5M, 70.5M free. May 14 23:49:06.361846 systemd-modules-load[220]: Inserted module 'overlay' May 14 23:49:06.387995 systemd[1]: Started systemd-journald.service - Journal Service. May 14 23:49:06.388073 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 23:49:06.385219 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 14 23:49:06.410219 kernel: Bridge firewalling registered May 14 23:49:06.404559 systemd-modules-load[220]: Inserted module 'br_netfilter' May 14 23:49:06.406362 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 23:49:06.423419 systemd[1]: Finished systemd-fsck-usr.service. May 14 23:49:06.428351 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 23:49:06.441219 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 14 23:49:06.469208 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:49:06.484652 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 23:49:06.501660 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 23:49:06.518167 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 23:49:06.535131 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:49:06.545636 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 23:49:06.561853 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 23:49:06.575497 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:49:06.606180 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 14 23:49:06.615157 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 23:49:06.645129 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 23:49:06.668293 dracut-cmdline[251]: dracut-dracut-053 May 14 23:49:06.668293 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9 May 14 23:49:06.667343 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 14 23:49:06.682458 systemd-resolved[254]: Positive Trust Anchors: May 14 23:49:06.682468 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 23:49:06.682498 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 23:49:06.685452 systemd-resolved[254]: Defaulting to hostname 'linux'. May 14 23:49:06.716333 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 23:49:06.733274 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 23:49:06.837969 kernel: SCSI subsystem initialized May 14 23:49:06.846958 kernel: Loading iSCSI transport class v2.0-870. May 14 23:49:06.856970 kernel: iscsi: registered transport (tcp) May 14 23:49:06.875152 kernel: iscsi: registered transport (qla4xxx) May 14 23:49:06.875222 kernel: QLogic iSCSI HBA Driver May 14 23:49:06.909653 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 14 23:49:06.927292 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 14 23:49:06.958969 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 14 23:49:06.959048 kernel: device-mapper: uevent: version 1.0.3 May 14 23:49:06.959068 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 14 23:49:07.013971 kernel: raid6: neonx8 gen() 15763 MB/s May 14 23:49:07.033960 kernel: raid6: neonx4 gen() 15817 MB/s May 14 23:49:07.053950 kernel: raid6: neonx2 gen() 13196 MB/s May 14 23:49:07.074950 kernel: raid6: neonx1 gen() 10501 MB/s May 14 23:49:07.094950 kernel: raid6: int64x8 gen() 6786 MB/s May 14 23:49:07.114950 kernel: raid6: int64x4 gen() 7359 MB/s May 14 23:49:07.135951 kernel: raid6: int64x2 gen() 6114 MB/s May 14 23:49:07.159433 kernel: raid6: int64x1 gen() 5058 MB/s May 14 23:49:07.159457 kernel: raid6: using algorithm neonx4 gen() 15817 MB/s May 14 23:49:07.183586 kernel: raid6: .... xor() 12436 MB/s, rmw enabled May 14 23:49:07.183596 kernel: raid6: using neon recovery algorithm May 14 23:49:07.192952 kernel: xor: measuring software checksum speed May 14 23:49:07.199927 kernel: 8regs : 20428 MB/sec May 14 23:49:07.199947 kernel: 32regs : 21664 MB/sec May 14 23:49:07.203578 kernel: arm64_neon : 27936 MB/sec May 14 23:49:07.207936 kernel: xor: using function: arm64_neon (27936 MB/sec) May 14 23:49:07.259969 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 23:49:07.269891 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 23:49:07.286098 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 23:49:07.311669 systemd-udevd[439]: Using default interface naming scheme 'v255'. May 14 23:49:07.317742 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 23:49:07.336072 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 23:49:07.372071 dracut-pre-trigger[455]: rd.md=0: removing MD RAID activation May 14 23:49:07.416657 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 14 23:49:07.433196 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 23:49:07.476300 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:49:07.497192 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 23:49:07.520886 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 23:49:07.538094 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 23:49:07.552999 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:49:07.566351 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 23:49:07.588637 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 23:49:07.607300 kernel: hv_vmbus: Vmbus version:5.3
May 14 23:49:07.607323 kernel: hv_vmbus: registering driver hid_hyperv
May 14 23:49:07.614274 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 23:49:07.689820 kernel: hv_vmbus: registering driver hyperv_keyboard
May 14 23:49:07.689851 kernel: pps_core: LinuxPPS API ver. 1 registered
May 14 23:49:07.689870 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 14 23:49:07.689880 kernel: hv_vmbus: registering driver hv_netvsc
May 14 23:49:07.689889 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
May 14 23:49:07.689899 kernel: PTP clock support registered
May 14 23:49:07.689908 kernel: hv_vmbus: registering driver hv_storvsc
May 14 23:49:07.689917 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 14 23:49:07.690107 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
May 14 23:49:07.690120 kernel: scsi host1: storvsc_host_t
May 14 23:49:07.682131 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 23:49:07.715821 kernel: scsi host0: storvsc_host_t
May 14 23:49:07.716069 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
May 14 23:49:07.682305 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:49:07.709534 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:49:07.723425 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:49:07.767335 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
May 14 23:49:07.723694 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:07.800482 kernel: hv_utils: Registering HyperV Utility Driver
May 14 23:49:07.800510 kernel: hv_vmbus: registering driver hv_utils
May 14 23:49:07.745629 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:49:07.764283 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:49:08.214709 kernel: hv_netvsc 000d3a6f-2db8-000d-3a6f-2db8000d3a6f eth0: VF slot 1 added
May 14 23:49:08.214879 kernel: hv_utils: Heartbeat IC version 3.0
May 14 23:49:08.214890 kernel: hv_utils: Shutdown IC version 3.2
May 14 23:49:08.214908 kernel: hv_utils: TimeSync IC version 4.0
May 14 23:49:08.214919 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 14 23:49:08.215030 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 14 23:49:07.811191 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:08.203391 systemd-resolved[254]: Clock change detected. Flushing caches.
May 14 23:49:08.251401 kernel: hv_vmbus: registering driver hv_pci
May 14 23:49:08.251429 kernel: hv_pci 7f965df3-41f9-4a62-ad9d-0a7d073c36c8: PCI VMBus probing: Using version 0x10004
May 14 23:49:08.228048 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:49:08.267035 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
May 14 23:49:08.267210 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 14 23:49:08.267301 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
May 14 23:49:08.276531 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:49:08.499467 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 14 23:49:08.499738 kernel: hv_pci 7f965df3-41f9-4a62-ad9d-0a7d073c36c8: PCI host bridge to bus 41f9:00
May 14 23:49:08.499900 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
May 14 23:49:08.500049 kernel: pci_bus 41f9:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
May 14 23:49:08.500165 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
May 14 23:49:08.500283 kernel: pci_bus 41f9:00: No busn resource found for root bus, will use [bus 00-ff]
May 14 23:49:08.500396 kernel: pci 41f9:00:02.0: [15b3:1018] type 00 class 0x020000
May 14 23:49:08.500440 kernel: pci 41f9:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 14 23:49:08.504719 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 14 23:49:08.504755 kernel: pci 41f9:00:02.0: enabling Extended Tags
May 14 23:49:08.513843 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 14 23:49:08.529747 kernel: pci 41f9:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 41f9:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
May 14 23:49:08.542701 kernel: pci_bus 41f9:00: busn_res: [bus 00-ff] end is updated to 00
May 14 23:49:08.542918 kernel: pci 41f9:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 14 23:49:08.585516 kernel: mlx5_core 41f9:00:02.0: enabling device (0000 -> 0002)
May 14 23:49:08.591709 kernel: mlx5_core 41f9:00:02.0: firmware version: 16.31.2424
May 14 23:49:09.177414 kernel: hv_netvsc 000d3a6f-2db8-000d-3a6f-2db8000d3a6f eth0: VF registering: eth1
May 14 23:49:09.177633 kernel: mlx5_core 41f9:00:02.0 eth1: joined to eth0
May 14 23:49:09.186817 kernel: mlx5_core 41f9:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
May 14 23:49:09.196718 kernel: mlx5_core 41f9:00:02.0 enP16889s1: renamed from eth1
May 14 23:49:09.315830 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (488)
May 14 23:49:09.333570 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 14 23:49:09.357584 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
May 14 23:49:09.399738 kernel: BTRFS: device fsid 369506fd-904a-45c2-a4ab-2d03e7866799 devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (498)
May 14 23:49:09.419536 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
May 14 23:49:09.434760 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
May 14 23:49:09.441811 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
May 14 23:49:09.484892 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 23:49:09.509720 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 14 23:49:09.517728 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 14 23:49:10.527769 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 14 23:49:10.528527 disk-uuid[607]: The operation has completed successfully.
May 14 23:49:10.592372 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 23:49:10.592481 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 23:49:10.642834 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 23:49:10.657977 sh[693]: Success
May 14 23:49:10.691759 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 14 23:49:10.903589 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 23:49:10.912725 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 23:49:10.929886 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 23:49:10.955979 kernel: BTRFS info (device dm-0): first mount of filesystem 369506fd-904a-45c2-a4ab-2d03e7866799
May 14 23:49:10.956039 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 14 23:49:10.963980 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 14 23:49:10.969134 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 14 23:49:10.973435 kernel: BTRFS info (device dm-0): using free space tree
May 14 23:49:11.302482 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 23:49:11.308050 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 23:49:11.328925 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 23:49:11.336921 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 23:49:11.379338 kernel: BTRFS info (device sda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:11.379401 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:49:11.384390 kernel: BTRFS info (device sda6): using free space tree
May 14 23:49:11.413759 kernel: BTRFS info (device sda6): auto enabling async discard
May 14 23:49:11.430775 kernel: BTRFS info (device sda6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:11.436035 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 23:49:11.454274 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 23:49:11.505004 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 23:49:11.524914 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 23:49:11.556195 systemd-networkd[874]: lo: Link UP
May 14 23:49:11.556200 systemd-networkd[874]: lo: Gained carrier
May 14 23:49:11.559986 systemd-networkd[874]: Enumeration completed
May 14 23:49:11.560315 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 23:49:11.566641 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:49:11.566645 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:49:11.567135 systemd[1]: Reached target network.target - Network.
May 14 23:49:11.630715 kernel: mlx5_core 41f9:00:02.0 enP16889s1: Link up
May 14 23:49:11.713835 kernel: hv_netvsc 000d3a6f-2db8-000d-3a6f-2db8000d3a6f eth0: Data path switched to VF: enP16889s1
May 14 23:49:11.714252 systemd-networkd[874]: enP16889s1: Link UP
May 14 23:49:11.714332 systemd-networkd[874]: eth0: Link UP
May 14 23:49:11.714453 systemd-networkd[874]: eth0: Gained carrier
May 14 23:49:11.714461 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:49:11.726725 systemd-networkd[874]: enP16889s1: Gained carrier
May 14 23:49:11.752748 systemd-networkd[874]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 14 23:49:12.277719 ignition[819]: Ignition 2.20.0
May 14 23:49:12.277732 ignition[819]: Stage: fetch-offline
May 14 23:49:12.282998 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 23:49:12.277771 ignition[819]: no configs at "/usr/lib/ignition/base.d"
May 14 23:49:12.277779 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:49:12.277872 ignition[819]: parsed url from cmdline: ""
May 14 23:49:12.304954 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 14 23:49:12.277876 ignition[819]: no config URL provided
May 14 23:49:12.277881 ignition[819]: reading system config file "/usr/lib/ignition/user.ign"
May 14 23:49:12.277888 ignition[819]: no config at "/usr/lib/ignition/user.ign"
May 14 23:49:12.277893 ignition[819]: failed to fetch config: resource requires networking
May 14 23:49:12.278077 ignition[819]: Ignition finished successfully
May 14 23:49:12.335012 ignition[884]: Ignition 2.20.0
May 14 23:49:12.335019 ignition[884]: Stage: fetch
May 14 23:49:12.335205 ignition[884]: no configs at "/usr/lib/ignition/base.d"
May 14 23:49:12.335214 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:49:12.335302 ignition[884]: parsed url from cmdline: ""
May 14 23:49:12.335305 ignition[884]: no config URL provided
May 14 23:49:12.335310 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
May 14 23:49:12.335317 ignition[884]: no config at "/usr/lib/ignition/user.ign"
May 14 23:49:12.335343 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 14 23:49:12.476354 ignition[884]: GET result: OK
May 14 23:49:12.476415 ignition[884]: config has been read from IMDS userdata
May 14 23:49:12.476459 ignition[884]: parsing config with SHA512: 6b6ce895e3a804051f6a79e37c6372555d4e38d7d68fa7f16407cd76df828a9383d09c30f098da8ab70df87ac9ae9e13133e62f0d286c975583e1191c475d773
May 14 23:49:12.480824 unknown[884]: fetched base config from "system"
May 14 23:49:12.481204 ignition[884]: fetch: fetch complete
May 14 23:49:12.480831 unknown[884]: fetched base config from "system"
May 14 23:49:12.481209 ignition[884]: fetch: fetch passed
May 14 23:49:12.480836 unknown[884]: fetched user config from "azure"
May 14 23:49:12.481254 ignition[884]: Ignition finished successfully
May 14 23:49:12.486284 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 14 23:49:12.511525 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 23:49:12.541206 ignition[890]: Ignition 2.20.0
May 14 23:49:12.541219 ignition[890]: Stage: kargs
May 14 23:49:12.546733 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 23:49:12.541389 ignition[890]: no configs at "/usr/lib/ignition/base.d"
May 14 23:49:12.541399 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:49:12.542392 ignition[890]: kargs: kargs passed
May 14 23:49:12.542442 ignition[890]: Ignition finished successfully
May 14 23:49:12.574995 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 23:49:12.598182 ignition[897]: Ignition 2.20.0
May 14 23:49:12.598194 ignition[897]: Stage: disks
May 14 23:49:12.598374 ignition[897]: no configs at "/usr/lib/ignition/base.d"
May 14 23:49:12.603213 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 23:49:12.598383 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:49:12.611405 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 23:49:12.599380 ignition[897]: disks: disks passed
May 14 23:49:12.618210 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 23:49:12.599443 ignition[897]: Ignition finished successfully
May 14 23:49:12.630960 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 23:49:12.642858 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 23:49:12.655023 systemd[1]: Reached target basic.target - Basic System.
May 14 23:49:12.687908 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 23:49:12.756376 systemd-fsck[906]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
May 14 23:49:12.760952 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 23:49:12.787897 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 23:49:12.845711 kernel: EXT4-fs (sda9): mounted filesystem 737cda88-7069-47ce-b2bc-d891099a68fb r/w with ordered data mode. Quota mode: none.
May 14 23:49:12.846189 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 23:49:12.851109 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 23:49:12.901777 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 23:49:12.908825 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 23:49:12.919905 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 14 23:49:12.935172 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 23:49:12.935228 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 23:49:12.947954 systemd-networkd[874]: enP16889s1: Gained IPv6LL
May 14 23:49:12.972026 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 23:49:12.993180 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (917)
May 14 23:49:12.993206 kernel: BTRFS info (device sda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:12.999263 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:49:13.003684 kernel: BTRFS info (device sda6): using free space tree
May 14 23:49:13.010098 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 23:49:13.025575 kernel: BTRFS info (device sda6): auto enabling async discard
May 14 23:49:13.026348 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 23:49:13.587778 systemd-networkd[874]: eth0: Gained IPv6LL
May 14 23:49:13.651093 coreos-metadata[919]: May 14 23:49:13.651 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 14 23:49:13.661095 coreos-metadata[919]: May 14 23:49:13.661 INFO Fetch successful
May 14 23:49:13.666626 coreos-metadata[919]: May 14 23:49:13.666 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 14 23:49:13.692144 coreos-metadata[919]: May 14 23:49:13.692 INFO Fetch successful
May 14 23:49:13.698144 coreos-metadata[919]: May 14 23:49:13.693 INFO wrote hostname ci-4230.1.1-n-00beb67e77 to /sysroot/etc/hostname
May 14 23:49:13.698575 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 14 23:49:13.919842 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory
May 14 23:49:13.954887 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory
May 14 23:49:13.964361 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory
May 14 23:49:13.970918 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 23:49:14.678546 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 23:49:14.692967 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 23:49:14.702925 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 23:49:14.720896 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 23:49:14.730333 kernel: BTRFS info (device sda6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:14.753529 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 23:49:14.762379 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 23:49:14.777950 ignition[1035]: INFO : Ignition 2.20.0
May 14 23:49:14.777950 ignition[1035]: INFO : Stage: mount
May 14 23:49:14.777950 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:49:14.777950 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:49:14.777950 ignition[1035]: INFO : mount: mount passed
May 14 23:49:14.777950 ignition[1035]: INFO : Ignition finished successfully
May 14 23:49:14.782911 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 23:49:14.800937 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 23:49:14.851083 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1047)
May 14 23:49:14.851116 kernel: BTRFS info (device sda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:14.851128 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:49:14.851138 kernel: BTRFS info (device sda6): using free space tree
May 14 23:49:14.861706 kernel: BTRFS info (device sda6): auto enabling async discard
May 14 23:49:14.863685 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 23:49:14.889772 ignition[1064]: INFO : Ignition 2.20.0
May 14 23:49:14.889772 ignition[1064]: INFO : Stage: files
May 14 23:49:14.897703 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:49:14.897703 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:49:14.897703 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping
May 14 23:49:14.915951 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 23:49:14.915951 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 23:49:14.961815 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 23:49:14.970429 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 23:49:14.970429 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 23:49:14.962212 unknown[1064]: wrote ssh authorized keys file for user: core
May 14 23:49:14.990540 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 23:49:14.990540 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 14 23:49:15.049960 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 23:49:15.217762 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 23:49:15.217762 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 23:49:15.240907 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 14 23:49:15.665726 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 14 23:49:15.736860 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 23:49:15.747854 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 14 23:49:15.747854 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 14 23:49:15.747854 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:49:15.747854 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:49:15.747854 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 23:49:15.747854 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 23:49:15.747854 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 23:49:15.747854 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 23:49:15.747854 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 23:49:15.747854 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 23:49:15.747854 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 23:49:15.747854 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 23:49:15.747854 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 23:49:15.747854 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 14 23:49:16.017219 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 14 23:49:16.234781 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 23:49:16.234781 ignition[1064]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 14 23:49:16.263723 ignition[1064]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 23:49:16.275125 ignition[1064]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 23:49:16.275125 ignition[1064]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 14 23:49:16.275125 ignition[1064]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 14 23:49:16.275125 ignition[1064]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 14 23:49:16.275125 ignition[1064]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 23:49:16.275125 ignition[1064]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 23:49:16.275125 ignition[1064]: INFO : files: files passed
May 14 23:49:16.275125 ignition[1064]: INFO : Ignition finished successfully
May 14 23:49:16.275846 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 23:49:16.314960 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 23:49:16.332907 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 23:49:16.386197 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:49:16.386197 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:49:16.358036 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 23:49:16.422848 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:49:16.363403 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 23:49:16.386596 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 23:49:16.402237 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 23:49:16.422930 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 23:49:16.473735 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 23:49:16.473867 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 23:49:16.487911 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 23:49:16.500463 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 23:49:16.511368 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 23:49:16.519831 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 23:49:16.547091 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 23:49:16.563121 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 23:49:16.581157 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 23:49:16.591682 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:49:16.605091 systemd[1]: Stopped target timers.target - Timer Units.
May 14 23:49:16.617250 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 23:49:16.617376 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 23:49:16.634077 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 23:49:16.640246 systemd[1]: Stopped target basic.target - Basic System.
May 14 23:49:16.652607 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 23:49:16.665133 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 23:49:16.677219 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 23:49:16.690001 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 23:49:16.702386 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 23:49:16.714988 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 23:49:16.726539 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 23:49:16.741155 systemd[1]: Stopped target swap.target - Swaps.
May 14 23:49:16.750993 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 23:49:16.751121 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 23:49:16.766178 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 23:49:16.772658 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:49:16.784688 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 23:49:16.784767 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:49:16.797070 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 23:49:16.797190 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 23:49:16.815389 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 23:49:16.815551 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 23:49:16.827473 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 23:49:16.827579 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 23:49:16.897202 ignition[1117]: INFO : Ignition 2.20.0
May 14 23:49:16.897202 ignition[1117]: INFO : Stage: umount
May 14 23:49:16.897202 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:49:16.897202 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:49:16.897202 ignition[1117]: INFO : umount: umount passed
May 14 23:49:16.897202 ignition[1117]: INFO : Ignition finished successfully
May 14 23:49:16.840329 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 14 23:49:16.840438 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 14 23:49:16.865978 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 23:49:16.879561 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 23:49:16.879774 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:49:16.912920 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 23:49:16.918062 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 23:49:16.918231 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:49:16.938395 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 23:49:16.938523 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 23:49:16.955273 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 23:49:16.956338 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 23:49:16.956448 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 23:49:16.965550 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 23:49:16.965684 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 23:49:16.977820 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 23:49:16.977888 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 23:49:16.992986 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 23:49:16.993056 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 23:49:17.004562 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 14 23:49:17.004614 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 14 23:49:17.015076 systemd[1]: Stopped target network.target - Network.
May 14 23:49:17.026429 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 23:49:17.026503 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 23:49:17.038748 systemd[1]: Stopped target paths.target - Path Units.
May 14 23:49:17.049073 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 23:49:17.052723 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:49:17.062301 systemd[1]: Stopped target slices.target - Slice Units.
May 14 23:49:17.073245 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 23:49:17.084982 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 23:49:17.085031 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 23:49:17.094950 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 23:49:17.094980 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 23:49:17.106206 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 23:49:17.106265 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 23:49:17.119401 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 23:49:17.119449 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 23:49:17.130336 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 23:49:17.142558 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 23:49:17.159498 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 23:49:17.159750 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 23:49:17.176939 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 23:49:17.177169 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 23:49:17.177290 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 23:49:17.194835 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 23:49:17.195546 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 23:49:17.195611 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:49:17.452008 kernel: hv_netvsc 000d3a6f-2db8-000d-3a6f-2db8000d3a6f eth0: Data path switched from VF: enP16889s1
May 14 23:49:17.222889 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 23:49:17.233518 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 23:49:17.233635 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 23:49:17.254104 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 23:49:17.254166 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 23:49:17.271022 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 23:49:17.271082 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 23:49:17.278143 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 23:49:17.278194 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:49:17.298737 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:49:17.311209 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 23:49:17.311288 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 23:49:17.346169 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 23:49:17.346364 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:49:17.359883 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 23:49:17.359963 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 23:49:17.370987 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 23:49:17.371033 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:49:17.383639 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 23:49:17.383714 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 23:49:17.402414 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 23:49:17.402483 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 23:49:17.435712 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 23:49:17.435790 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:49:17.457964 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 23:49:17.472779 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 23:49:17.472851 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:49:17.493258 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:49:17.493327 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:17.506216 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 14 23:49:17.506286 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 23:49:17.506613 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 23:49:17.506757 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 23:49:17.516965 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 23:49:17.517048 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 23:49:17.737222 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
May 14 23:49:17.528166 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 23:49:17.528249 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 23:49:17.540903 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 23:49:17.553261 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 23:49:17.553366 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 23:49:17.585994 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 23:49:17.622675 systemd[1]: Switching root.
May 14 23:49:17.776278 systemd-journald[218]: Journal stopped
May 14 23:49:22.960221 kernel: mlx5_core 41f9:00:02.0: poll_health:835:(pid 0): device's health compromised - reached miss count
May 14 23:49:22.960250 kernel: SELinux: policy capability network_peer_controls=1
May 14 23:49:22.960263 kernel: SELinux: policy capability open_perms=1
May 14 23:49:22.960271 kernel: SELinux: policy capability extended_socket_class=1
May 14 23:49:22.960279 kernel: SELinux: policy capability always_check_network=0
May 14 23:49:22.960286 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 23:49:22.960295 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 23:49:22.960303 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 23:49:22.960311 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 23:49:22.960318 kernel: audit: type=1403 audit(1747266558.653:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 23:49:22.960330 systemd[1]: Successfully loaded SELinux policy in 131.849ms.
May 14 23:49:22.960341 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.611ms.
May 14 23:49:22.960352 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 23:49:22.960361 systemd[1]: Detected virtualization microsoft.
May 14 23:49:22.960372 systemd[1]: Detected architecture arm64.
May 14 23:49:22.960381 systemd[1]: Detected first boot.
May 14 23:49:22.960390 systemd[1]: Hostname set to .
May 14 23:49:22.960399 systemd[1]: Initializing machine ID from random generator.
May 14 23:49:22.960408 zram_generator::config[1161]: No configuration found.
May 14 23:49:22.960417 kernel: NET: Registered PF_VSOCK protocol family
May 14 23:49:22.960426 systemd[1]: Populated /etc with preset unit settings.
May 14 23:49:22.960437 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 23:49:22.960446 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 23:49:22.960455 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 23:49:22.960463 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 23:49:22.960472 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 23:49:22.960482 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 23:49:22.960491 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 23:49:22.960502 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 23:49:22.960511 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 23:49:22.960520 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 23:49:22.960529 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 23:49:22.960539 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 23:49:22.960548 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:49:22.960558 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:49:22.960567 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 23:49:22.960577 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 23:49:22.960587 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 23:49:22.960597 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 23:49:22.960606 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 14 23:49:22.960617 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:49:22.960627 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 23:49:22.960636 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 23:49:22.960645 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 23:49:22.960656 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 23:49:22.960665 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:49:22.960674 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 23:49:22.960683 systemd[1]: Reached target slices.target - Slice Units.
May 14 23:49:22.960705 systemd[1]: Reached target swap.target - Swaps.
May 14 23:49:22.960715 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 23:49:22.960724 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 23:49:22.960734 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 23:49:22.960745 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:49:22.960760 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 23:49:22.960769 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:49:22.960779 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 23:49:22.960788 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 23:49:22.960799 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 23:49:22.960809 systemd[1]: Mounting media.mount - External Media Directory...
May 14 23:49:22.960818 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 23:49:22.960827 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 23:49:22.960837 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 23:49:22.960847 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 23:49:22.960856 systemd[1]: Reached target machines.target - Containers.
May 14 23:49:22.960866 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 23:49:22.960877 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:49:22.960886 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 23:49:22.960896 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 23:49:22.960905 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:49:22.960915 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 23:49:22.960924 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:49:22.960934 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 23:49:22.960943 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:49:22.960954 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 23:49:22.960964 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 23:49:22.960974 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 23:49:22.960984 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 23:49:22.960993 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 23:49:22.961002 kernel: fuse: init (API version 7.39)
May 14 23:49:22.961011 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:49:22.961021 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 23:49:22.961031 kernel: loop: module loaded
May 14 23:49:22.961040 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 23:49:22.961049 kernel: ACPI: bus type drm_connector registered
May 14 23:49:22.961059 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 23:49:22.961068 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 23:49:22.961078 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 23:49:22.961087 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 23:49:22.961115 systemd-journald[1259]: Collecting audit messages is disabled.
May 14 23:49:22.961138 systemd-journald[1259]: Journal started
May 14 23:49:22.961158 systemd-journald[1259]: Runtime Journal (/run/log/journal/cb553bec406d4ebda03f8f8ba5f6d8c9) is 8M, max 78.5M, 70.5M free.
May 14 23:49:21.608347 systemd[1]: Queued start job for default target multi-user.target.
May 14 23:49:21.616582 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 14 23:49:21.617006 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 23:49:21.617363 systemd[1]: systemd-journald.service: Consumed 3.377s CPU time.
May 14 23:49:22.974534 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 23:49:22.974606 systemd[1]: Stopped verity-setup.service.
May 14 23:49:22.993309 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 23:49:22.996784 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 23:49:23.002724 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 23:49:23.009033 systemd[1]: Mounted media.mount - External Media Directory.
May 14 23:49:23.014545 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 23:49:23.020612 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 23:49:23.026925 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 23:49:23.032498 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 23:49:23.039297 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:49:23.048561 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 23:49:23.048959 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 23:49:23.055950 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:49:23.056132 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:49:23.063839 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 23:49:23.063995 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 23:49:23.070662 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:49:23.070842 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:49:23.078191 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 23:49:23.078349 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 23:49:23.084782 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:49:23.084954 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:49:23.091724 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 23:49:23.098672 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 23:49:23.106313 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 23:49:23.113832 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 23:49:23.121405 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:49:23.136729 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 23:49:23.150798 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 23:49:23.157985 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 23:49:23.164219 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 23:49:23.164262 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 23:49:23.171037 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 23:49:23.185851 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 23:49:23.193495 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 23:49:23.199493 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:49:23.228846 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 23:49:23.236018 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 23:49:23.242314 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 23:49:23.243448 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 23:49:23.250300 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 23:49:23.251868 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:49:23.260876 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 23:49:23.275989 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 23:49:23.284883 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 14 23:49:23.298327 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 23:49:23.305500 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 23:49:23.308655 systemd-journald[1259]: Time spent on flushing to /var/log/journal/cb553bec406d4ebda03f8f8ba5f6d8c9 is 12.763ms for 914 entries.
May 14 23:49:23.308655 systemd-journald[1259]: System Journal (/var/log/journal/cb553bec406d4ebda03f8f8ba5f6d8c9) is 8M, max 2.6G, 2.6G free.
May 14 23:49:23.351544 systemd-journald[1259]: Received client request to flush runtime journal.
May 14 23:49:23.319050 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 23:49:23.326545 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 23:49:23.336882 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 23:49:23.348018 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 23:49:23.357729 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 23:49:23.374836 udevadm[1305]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 14 23:49:23.376741 kernel: loop0: detected capacity change from 0 to 123192
May 14 23:49:23.401681 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 23:49:23.404744 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 23:49:23.433158 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:49:23.607737 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 23:49:23.620928 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 23:49:23.758753 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
May 14 23:49:23.758770 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
May 14 23:49:23.763729 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:49:23.781717 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 23:49:23.819727 kernel: loop1: detected capacity change from 0 to 28720
May 14 23:49:24.186730 kernel: loop2: detected capacity change from 0 to 189592
May 14 23:49:24.234743 kernel: loop3: detected capacity change from 0 to 113512
May 14 23:49:24.354554 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 23:49:24.369912 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:49:24.406008 systemd-udevd[1327]: Using default interface naming scheme 'v255'.
May 14 23:49:24.543639 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:49:24.567989 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 23:49:24.592735 kernel: loop4: detected capacity change from 0 to 123192
May 14 23:49:24.604822 kernel: loop5: detected capacity change from 0 to 28720
May 14 23:49:24.617936 kernel: loop6: detected capacity change from 0 to 189592
May 14 23:49:24.638926 kernel: loop7: detected capacity change from 0 to 113512
May 14 23:49:24.633124 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 23:49:24.638818 (sd-merge)[1351]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
May 14 23:49:24.639277 (sd-merge)[1351]: Merged extensions into '/usr'.
May 14 23:49:24.659874 systemd[1]: Reload requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 23:49:24.660048 systemd[1]: Reloading...
May 14 23:49:24.779854 zram_generator::config[1386]: No configuration found.
May 14 23:49:24.779943 kernel: mousedev: PS/2 mouse device common for all mice
May 14 23:49:24.899333 kernel: hv_vmbus: registering driver hv_balloon
May 14 23:49:24.899426 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
May 14 23:49:24.910733 kernel: hv_balloon: Memory hot add disabled on ARM64
May 14 23:49:24.942366 kernel: hv_vmbus: registering driver hyperv_fb
May 14 23:49:24.942457 kernel: hyperv_fb: Synthvid Version major 3, minor 5
May 14 23:49:24.953277 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
May 14 23:49:24.960255 kernel: Console: switching to colour dummy device 80x25
May 14 23:49:24.966711 kernel: Console: switching to colour frame buffer device 128x48
May 14 23:49:24.997430 systemd-networkd[1343]: lo: Link UP
May 14 23:49:24.998975 systemd-networkd[1343]: lo: Gained carrier
May 14 23:49:25.004178 systemd-networkd[1343]: Enumeration completed
May 14 23:49:25.005970 systemd-networkd[1343]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:49:25.006954 systemd-networkd[1343]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:49:25.007935 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:49:25.040899 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1350)
May 14 23:49:25.073717 kernel: mlx5_core 41f9:00:02.0 enP16889s1: Link up
May 14 23:49:25.117745 kernel: hv_netvsc 000d3a6f-2db8-000d-3a6f-2db8000d3a6f eth0: Data path switched to VF: enP16889s1
May 14 23:49:25.119256 systemd-networkd[1343]: enP16889s1: Link UP
May 14 23:49:25.119360 systemd-networkd[1343]: eth0: Link UP
May 14 23:49:25.119363 systemd-networkd[1343]: eth0: Gained carrier
May 14 23:49:25.119377 systemd-networkd[1343]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:49:25.124074 systemd-networkd[1343]: enP16889s1: Gained carrier
May 14 23:49:25.129740 systemd-networkd[1343]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 14 23:49:25.193585 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 14 23:49:25.194006 systemd[1]: Reloading finished in 533 ms.
May 14 23:49:25.209829 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 23:49:25.216017 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 23:49:25.223743 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 23:49:25.266100 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 14 23:49:25.286994 systemd[1]: Starting ensure-sysext.service...
May 14 23:49:25.295018 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 23:49:25.305143 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 23:49:25.314776 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 23:49:25.326064 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 23:49:25.337810 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:49:25.357807 systemd-tmpfiles[1523]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 23:49:25.358015 systemd-tmpfiles[1523]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 23:49:25.358642 systemd-tmpfiles[1523]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 23:49:25.358907 systemd-tmpfiles[1523]: ACLs are not supported, ignoring.
May 14 23:49:25.358960 systemd-tmpfiles[1523]: ACLs are not supported, ignoring.
May 14 23:49:25.362800 systemd[1]: Reload requested from client PID 1519 ('systemctl') (unit ensure-sysext.service)...
May 14 23:49:25.362823 systemd[1]: Reloading...
May 14 23:49:25.410328 systemd-tmpfiles[1523]: Detected autofs mount point /boot during canonicalization of boot.
May 14 23:49:25.410494 systemd-tmpfiles[1523]: Skipping /boot
May 14 23:49:25.421232 systemd-tmpfiles[1523]: Detected autofs mount point /boot during canonicalization of boot.
May 14 23:49:25.421825 systemd-tmpfiles[1523]: Skipping /boot
May 14 23:49:25.451763 zram_generator::config[1565]: No configuration found.
May 14 23:49:25.558142 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:49:25.656328 systemd[1]: Reloading finished in 293 ms.
May 14 23:49:25.669748 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 14 23:49:25.689741 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 23:49:25.697183 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 23:49:25.704382 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:49:25.711980 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:25.731943 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 23:49:25.738449 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 23:49:25.747059 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 14 23:49:25.758677 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 23:49:25.768956 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 23:49:25.780944 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 23:49:25.792207 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:49:25.798085 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:49:25.808881 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:49:25.825807 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:49:25.836106 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:49:25.837578 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:49:25.842018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:49:25.843731 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:49:25.850510 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:49:25.850690 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:49:25.858265 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:49:25.858474 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:49:25.870941 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 23:49:25.882086 lvm[1627]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 23:49:25.888383 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:49:25.896062 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:49:25.911177 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:49:25.918537 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:49:25.926078 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:49:25.926227 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:49:25.928453 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 23:49:25.935686 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:49:25.935877 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:49:25.944502 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:49:25.944681 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 23:49:25.953364 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 14 23:49:25.961074 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 23:49:25.962737 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 23:49:25.977517 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 23:49:25.983866 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 23:49:25.988954 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 14 23:49:25.993720 lvm[1665]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 23:49:25.998086 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 23:49:26.010008 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 23:49:26.011000 augenrules[1669]: No rules May 14 23:49:26.022094 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 23:49:26.032083 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 23:49:26.036966 systemd-resolved[1630]: Positive Trust Anchors: May 14 23:49:26.036984 systemd-resolved[1630]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 23:49:26.037016 systemd-resolved[1630]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 23:49:26.040027 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 23:49:26.040184 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:49:26.040338 systemd[1]: Reached target time-set.target - System Time Set. May 14 23:49:26.047274 systemd[1]: audit-rules.service: Deactivated successfully. May 14 23:49:26.048748 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 23:49:26.054636 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 14 23:49:26.062428 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 23:49:26.062597 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 23:49:26.073994 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 23:49:26.074160 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 23:49:26.080793 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 23:49:26.082403 systemd-resolved[1630]: Using system hostname 'ci-4230.1.1-n-00beb67e77'. 
May 14 23:49:26.082767 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 23:49:26.090188 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 23:49:26.097026 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 23:49:26.097206 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 23:49:26.109765 systemd[1]: Finished ensure-sysext.service. May 14 23:49:26.117319 systemd[1]: Reached target network.target - Network. May 14 23:49:26.122721 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 23:49:26.129604 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 23:49:26.129709 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 23:49:26.707834 systemd-networkd[1343]: enP16889s1: Gained IPv6LL May 14 23:49:26.988668 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 23:49:26.996654 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 23:49:27.091831 systemd-networkd[1343]: eth0: Gained IPv6LL May 14 23:49:27.094024 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 23:49:27.101288 systemd[1]: Reached target network-online.target - Network is Online. May 14 23:49:29.531534 ldconfig[1297]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 23:49:29.541299 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 23:49:29.553947 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
May 14 23:49:29.568564 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 14 23:49:29.575017 systemd[1]: Reached target sysinit.target - System Initialization. May 14 23:49:29.580842 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 23:49:29.587812 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 23:49:29.596755 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 23:49:29.602683 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 23:49:29.609688 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 23:49:29.616560 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 23:49:29.616595 systemd[1]: Reached target paths.target - Path Units. May 14 23:49:29.621659 systemd[1]: Reached target timers.target - Timer Units. May 14 23:49:29.627843 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 23:49:29.635366 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 23:49:29.643070 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 23:49:29.650445 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 23:49:29.657936 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 23:49:29.674421 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 23:49:29.680501 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 23:49:29.687501 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 23:49:29.693363 systemd[1]: Reached target sockets.target - Socket Units. 
May 14 23:49:29.698499 systemd[1]: Reached target basic.target - Basic System. May 14 23:49:29.703870 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 23:49:29.703898 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 23:49:29.714831 systemd[1]: Starting chronyd.service - NTP client/server... May 14 23:49:29.723253 systemd[1]: Starting containerd.service - containerd container runtime... May 14 23:49:29.736890 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 14 23:49:29.747785 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 23:49:29.754767 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 23:49:29.760473 (chronyd)[1690]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS May 14 23:49:29.769168 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 23:49:29.776553 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 23:49:29.776598 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). May 14 23:49:29.779532 jq[1697]: false May 14 23:49:29.779907 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. May 14 23:49:29.785778 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). May 14 23:49:29.787150 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:49:29.795674 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
May 14 23:49:29.803931 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 23:49:29.805350 KVP[1699]: KVP starting; pid is:1699 May 14 23:49:29.815959 KVP[1699]: KVP LIC Version: 3.1 May 14 23:49:29.816400 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 23:49:29.822117 kernel: hv_utils: KVP IC version 4.0 May 14 23:49:29.819889 chronyd[1707]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) May 14 23:49:29.830284 extend-filesystems[1698]: Found loop4 May 14 23:49:29.847157 extend-filesystems[1698]: Found loop5 May 14 23:49:29.847157 extend-filesystems[1698]: Found loop6 May 14 23:49:29.847157 extend-filesystems[1698]: Found loop7 May 14 23:49:29.847157 extend-filesystems[1698]: Found sda May 14 23:49:29.847157 extend-filesystems[1698]: Found sda1 May 14 23:49:29.847157 extend-filesystems[1698]: Found sda2 May 14 23:49:29.847157 extend-filesystems[1698]: Found sda3 May 14 23:49:29.847157 extend-filesystems[1698]: Found usr May 14 23:49:29.847157 extend-filesystems[1698]: Found sda4 May 14 23:49:29.847157 extend-filesystems[1698]: Found sda6 May 14 23:49:29.847157 extend-filesystems[1698]: Found sda7 May 14 23:49:29.847157 extend-filesystems[1698]: Found sda9 May 14 23:49:29.847157 extend-filesystems[1698]: Checking size of /dev/sda9 May 14 23:49:29.831906 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 23:49:29.865123 chronyd[1707]: Timezone right/UTC failed leap second check, ignoring May 14 23:49:30.043937 extend-filesystems[1698]: Old size kept for /dev/sda9 May 14 23:49:30.043937 extend-filesystems[1698]: Found sr0 May 14 23:49:29.858018 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 14 23:49:30.118813 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1757) May 14 23:49:29.865311 chronyd[1707]: Loaded seccomp filter (level 2) May 14 23:49:30.118913 coreos-metadata[1692]: May 14 23:49:30.080 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 14 23:49:30.118913 coreos-metadata[1692]: May 14 23:49:30.091 INFO Fetch successful May 14 23:49:30.118913 coreos-metadata[1692]: May 14 23:49:30.092 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 14 23:49:30.118913 coreos-metadata[1692]: May 14 23:49:30.110 INFO Fetch successful May 14 23:49:30.118913 coreos-metadata[1692]: May 14 23:49:30.111 INFO Fetching http://168.63.129.16/machine/18c13d86-b47b-4044-b7db-10bb275fc2d5/4d859b75%2D5b08%2D4a40%2Db737%2D7b968e49733c.%5Fci%2D4230.1.1%2Dn%2D00beb67e77?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 14 23:49:30.118913 coreos-metadata[1692]: May 14 23:49:30.113 INFO Fetch successful May 14 23:49:30.118913 coreos-metadata[1692]: May 14 23:49:30.113 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 14 23:49:29.871833 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 23:49:29.930228 dbus-daemon[1693]: [system] SELinux support is enabled May 14 23:49:29.888598 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 23:49:30.127456 update_engine[1726]: I20250514 23:49:29.991833 1726 main.cc:92] Flatcar Update Engine starting May 14 23:49:30.127456 update_engine[1726]: I20250514 23:49:29.999624 1726 update_check_scheduler.cc:74] Next update check in 3m34s May 14 23:49:29.889193 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 14 23:49:30.129247 jq[1728]: true May 14 23:49:30.129407 coreos-metadata[1692]: May 14 23:49:30.127 INFO Fetch successful May 14 23:49:29.895897 systemd[1]: Starting update-engine.service - Update Engine... May 14 23:49:29.905262 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 23:49:29.922248 systemd[1]: Started chronyd.service - NTP client/server. May 14 23:49:29.932277 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 23:49:30.136633 jq[1743]: true May 14 23:49:29.952273 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 23:49:29.952473 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 23:49:29.952773 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 23:49:29.952942 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 23:49:29.977574 systemd[1]: motdgen.service: Deactivated successfully. May 14 23:49:29.977802 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 23:49:29.996063 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 23:49:30.015110 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 23:49:30.015318 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 23:49:30.064141 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 23:49:30.064170 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
May 14 23:49:30.066142 (ntainerd)[1746]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 23:49:30.084213 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 23:49:30.084235 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 23:49:30.121073 systemd[1]: Started update-engine.service - Update Engine. May 14 23:49:30.131336 systemd-logind[1717]: New seat seat0. May 14 23:49:30.146421 tar[1739]: linux-arm64/helm May 14 23:49:30.148607 systemd-logind[1717]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) May 14 23:49:30.150884 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 23:49:30.160239 systemd[1]: Started systemd-logind.service - User Login Management. May 14 23:49:30.267853 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 14 23:49:30.281513 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 23:49:30.314714 bash[1791]: Updated "/home/core/.ssh/authorized_keys" May 14 23:49:30.315468 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 23:49:30.326996 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 14 23:49:30.563783 locksmithd[1768]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 23:49:30.824537 containerd[1746]: time="2025-05-14T23:49:30.824422920Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 14 23:49:30.899439 containerd[1746]: time="2025-05-14T23:49:30.899103360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 May 14 23:49:30.903705 containerd[1746]: time="2025-05-14T23:49:30.902776240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:30.903705 containerd[1746]: time="2025-05-14T23:49:30.902825720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 14 23:49:30.903705 containerd[1746]: time="2025-05-14T23:49:30.902844800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 14 23:49:30.903705 containerd[1746]: time="2025-05-14T23:49:30.903459280Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 14 23:49:30.903705 containerd[1746]: time="2025-05-14T23:49:30.903486120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 14 23:49:30.903895 containerd[1746]: time="2025-05-14T23:49:30.903820280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:30.903895 containerd[1746]: time="2025-05-14T23:49:30.903846880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 14 23:49:30.904266 containerd[1746]: time="2025-05-14T23:49:30.904234840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:30.904266 containerd[1746]: time="2025-05-14T23:49:30.904262160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 14 23:49:30.904315 containerd[1746]: time="2025-05-14T23:49:30.904279480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:30.904315 containerd[1746]: time="2025-05-14T23:49:30.904289520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 14 23:49:30.904399 containerd[1746]: time="2025-05-14T23:49:30.904380440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 14 23:49:30.904611 containerd[1746]: time="2025-05-14T23:49:30.904591040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 14 23:49:30.906849 containerd[1746]: time="2025-05-14T23:49:30.905569160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:30.906849 containerd[1746]: time="2025-05-14T23:49:30.905757360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 14 23:49:30.906849 containerd[1746]: time="2025-05-14T23:49:30.906036520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 14 23:49:30.906849 containerd[1746]: time="2025-05-14T23:49:30.906106920Z" level=info msg="metadata content store policy set" policy=shared May 14 23:49:30.921015 tar[1739]: linux-arm64/LICENSE May 14 23:49:30.921015 tar[1739]: linux-arm64/README.md May 14 23:49:30.921645 containerd[1746]: time="2025-05-14T23:49:30.921480640Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 14 23:49:30.921645 containerd[1746]: time="2025-05-14T23:49:30.921554080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 14 23:49:30.924974 containerd[1746]: time="2025-05-14T23:49:30.924736560Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 14 23:49:30.924974 containerd[1746]: time="2025-05-14T23:49:30.924813080Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 14 23:49:30.924974 containerd[1746]: time="2025-05-14T23:49:30.924832320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 14 23:49:30.926925 containerd[1746]: time="2025-05-14T23:49:30.926882840Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 14 23:49:30.927391 containerd[1746]: time="2025-05-14T23:49:30.927363520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 14 23:49:30.928059 containerd[1746]: time="2025-05-14T23:49:30.928031240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 14 23:49:30.928098 containerd[1746]: time="2025-05-14T23:49:30.928061600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 May 14 23:49:30.928134 containerd[1746]: time="2025-05-14T23:49:30.928090880Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 14 23:49:30.928134 containerd[1746]: time="2025-05-14T23:49:30.928118800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 14 23:49:30.928171 containerd[1746]: time="2025-05-14T23:49:30.928133360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 14 23:49:30.928171 containerd[1746]: time="2025-05-14T23:49:30.928146400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 14 23:49:30.928171 containerd[1746]: time="2025-05-14T23:49:30.928162920Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 14 23:49:30.928227 containerd[1746]: time="2025-05-14T23:49:30.928187680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 14 23:49:30.928227 containerd[1746]: time="2025-05-14T23:49:30.928204760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 14 23:49:30.928227 containerd[1746]: time="2025-05-14T23:49:30.928218200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 14 23:49:30.928280 containerd[1746]: time="2025-05-14T23:49:30.928232640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 14 23:49:30.928280 containerd[1746]: time="2025-05-14T23:49:30.928271560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 14 23:49:30.928314 containerd[1746]: time="2025-05-14T23:49:30.928286840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 14 23:49:30.928314 containerd[1746]: time="2025-05-14T23:49:30.928299920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 14 23:49:30.928355 containerd[1746]: time="2025-05-14T23:49:30.928314760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 14 23:49:30.928355 containerd[1746]: time="2025-05-14T23:49:30.928338000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 14 23:49:30.928396 containerd[1746]: time="2025-05-14T23:49:30.928354560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 14 23:49:30.928396 containerd[1746]: time="2025-05-14T23:49:30.928369000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 14 23:49:30.928396 containerd[1746]: time="2025-05-14T23:49:30.928383040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 14 23:49:30.928445 containerd[1746]: time="2025-05-14T23:49:30.928397320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 14 23:49:30.928445 containerd[1746]: time="2025-05-14T23:49:30.928422640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 14 23:49:30.928445 containerd[1746]: time="2025-05-14T23:49:30.928435560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 14 23:49:30.928498 containerd[1746]: time="2025-05-14T23:49:30.928448040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 May 14 23:49:30.928498 containerd[1746]: time="2025-05-14T23:49:30.928464400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 14 23:49:30.928498 containerd[1746]: time="2025-05-14T23:49:30.928479880Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 14 23:49:30.928547 containerd[1746]: time="2025-05-14T23:49:30.928516120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 14 23:49:30.928547 containerd[1746]: time="2025-05-14T23:49:30.928530560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 14 23:49:30.928547 containerd[1746]: time="2025-05-14T23:49:30.928540760Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 14 23:49:30.931657 containerd[1746]: time="2025-05-14T23:49:30.928615840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 14 23:49:30.931657 containerd[1746]: time="2025-05-14T23:49:30.928647800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 14 23:49:30.931657 containerd[1746]: time="2025-05-14T23:49:30.928660320Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 14 23:49:30.931657 containerd[1746]: time="2025-05-14T23:49:30.928679760Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 14 23:49:30.931657 containerd[1746]: time="2025-05-14T23:49:30.930069680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 May 14 23:49:30.931657 containerd[1746]: time="2025-05-14T23:49:30.930116520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 14 23:49:30.931657 containerd[1746]: time="2025-05-14T23:49:30.930268120Z" level=info msg="NRI interface is disabled by configuration." May 14 23:49:30.931657 containerd[1746]: time="2025-05-14T23:49:30.930290040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 14 23:49:30.934815 containerd[1746]: time="2025-05-14T23:49:30.930649920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 14 23:49:30.934815 containerd[1746]: time="2025-05-14T23:49:30.930744280Z" level=info msg="Connect containerd service" May 14 23:49:30.934815 containerd[1746]: time="2025-05-14T23:49:30.930794760Z" level=info msg="using legacy CRI server" May 14 23:49:30.934815 containerd[1746]: time="2025-05-14T23:49:30.930816560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 23:49:30.934815 containerd[1746]: time="2025-05-14T23:49:30.931004040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 14 23:49:30.934815 containerd[1746]: time="2025-05-14T23:49:30.932940760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" May 14 23:49:30.934815 containerd[1746]: time="2025-05-14T23:49:30.933496720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 23:49:30.934815 containerd[1746]: time="2025-05-14T23:49:30.933545680Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 23:49:30.934815 containerd[1746]: time="2025-05-14T23:49:30.933581960Z" level=info msg="Start subscribing containerd event" May 14 23:49:30.934815 containerd[1746]: time="2025-05-14T23:49:30.933628240Z" level=info msg="Start recovering state" May 14 23:49:30.934815 containerd[1746]: time="2025-05-14T23:49:30.933719200Z" level=info msg="Start event monitor" May 14 23:49:30.934815 containerd[1746]: time="2025-05-14T23:49:30.933732600Z" level=info msg="Start snapshots syncer" May 14 23:49:30.934815 containerd[1746]: time="2025-05-14T23:49:30.933745800Z" level=info msg="Start cni network conf syncer for default" May 14 23:49:30.934815 containerd[1746]: time="2025-05-14T23:49:30.933754000Z" level=info msg="Start streaming server" May 14 23:49:30.934815 containerd[1746]: time="2025-05-14T23:49:30.933827920Z" level=info msg="containerd successfully booted in 0.114152s" May 14 23:49:30.933908 systemd[1]: Started containerd.service - containerd container runtime. May 14 23:49:30.941531 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 23:49:31.013710 sshd_keygen[1718]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 23:49:31.043608 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 23:49:31.061464 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 23:49:31.068895 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... May 14 23:49:31.077925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 23:49:31.086840 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:49:31.087334 systemd[1]: issuegen.service: Deactivated successfully. May 14 23:49:31.089491 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 23:49:31.113169 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 23:49:31.128200 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. May 14 23:49:31.140825 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 23:49:31.155632 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 23:49:31.163652 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 14 23:49:31.173105 systemd[1]: Reached target getty.target - Login Prompts. May 14 23:49:31.179052 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 23:49:31.186060 systemd[1]: Startup finished in 680ms (kernel) + 12.413s (initrd) + 12.663s (userspace) = 25.757s. May 14 23:49:31.536153 login[1882]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 14 23:49:31.541994 login[1883]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 14 23:49:31.549542 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 23:49:31.559095 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 23:49:31.565844 kubelet[1871]: E0514 23:49:31.561546 1871 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:49:31.568758 systemd-logind[1717]: New session 1 of user core. 
May 14 23:49:31.569843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:49:31.569984 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:49:31.570234 systemd[1]: kubelet.service: Consumed 681ms CPU time, 234.5M memory peak. May 14 23:49:31.574290 systemd-logind[1717]: New session 2 of user core. May 14 23:49:31.579863 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 23:49:31.589017 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 23:49:31.606939 (systemd)[1897]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 23:49:31.609450 systemd-logind[1717]: New session c1 of user core. May 14 23:49:31.785440 systemd[1897]: Queued start job for default target default.target. May 14 23:49:31.792675 systemd[1897]: Created slice app.slice - User Application Slice. May 14 23:49:31.792730 systemd[1897]: Reached target paths.target - Paths. May 14 23:49:31.792773 systemd[1897]: Reached target timers.target - Timers. May 14 23:49:31.794435 systemd[1897]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 23:49:31.806082 systemd[1897]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 23:49:31.806211 systemd[1897]: Reached target sockets.target - Sockets. May 14 23:49:31.806256 systemd[1897]: Reached target basic.target - Basic System. May 14 23:49:31.806284 systemd[1897]: Reached target default.target - Main User Target. May 14 23:49:31.806309 systemd[1897]: Startup finished in 189ms. May 14 23:49:31.806759 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 23:49:31.817934 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 23:49:31.818786 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 14 23:49:33.089727 waagent[1879]: 2025-05-14T23:49:33.085783Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 May 14 23:49:33.091894 waagent[1879]: 2025-05-14T23:49:33.091812Z INFO Daemon Daemon OS: flatcar 4230.1.1 May 14 23:49:33.096622 waagent[1879]: 2025-05-14T23:49:33.096557Z INFO Daemon Daemon Python: 3.11.11 May 14 23:49:33.101490 waagent[1879]: 2025-05-14T23:49:33.101420Z INFO Daemon Daemon Run daemon May 14 23:49:33.105639 waagent[1879]: 2025-05-14T23:49:33.105584Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.1.1' May 14 23:49:33.114954 waagent[1879]: 2025-05-14T23:49:33.114876Z INFO Daemon Daemon Using waagent for provisioning May 14 23:49:33.120501 waagent[1879]: 2025-05-14T23:49:33.120448Z INFO Daemon Daemon Activate resource disk May 14 23:49:33.127555 waagent[1879]: 2025-05-14T23:49:33.127492Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 14 23:49:33.141519 waagent[1879]: 2025-05-14T23:49:33.141431Z INFO Daemon Daemon Found device: None May 14 23:49:33.146385 waagent[1879]: 2025-05-14T23:49:33.146318Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 14 23:49:33.155232 waagent[1879]: 2025-05-14T23:49:33.155165Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 14 23:49:33.168154 waagent[1879]: 2025-05-14T23:49:33.168102Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 14 23:49:33.174120 waagent[1879]: 2025-05-14T23:49:33.174059Z INFO Daemon Daemon Running default provisioning handler May 14 23:49:33.186402 waagent[1879]: 2025-05-14T23:49:33.186313Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
May 14 23:49:33.200518 waagent[1879]: 2025-05-14T23:49:33.200441Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 14 23:49:33.210514 waagent[1879]: 2025-05-14T23:49:33.210444Z INFO Daemon Daemon cloud-init is enabled: False May 14 23:49:33.215664 waagent[1879]: 2025-05-14T23:49:33.215603Z INFO Daemon Daemon Copying ovf-env.xml May 14 23:49:33.327830 waagent[1879]: 2025-05-14T23:49:33.327722Z INFO Daemon Daemon Successfully mounted dvd May 14 23:49:33.358805 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 14 23:49:33.361874 waagent[1879]: 2025-05-14T23:49:33.361784Z INFO Daemon Daemon Detect protocol endpoint May 14 23:49:33.366981 waagent[1879]: 2025-05-14T23:49:33.366913Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 14 23:49:33.373527 waagent[1879]: 2025-05-14T23:49:33.373460Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler May 14 23:49:33.381126 waagent[1879]: 2025-05-14T23:49:33.381060Z INFO Daemon Daemon Test for route to 168.63.129.16 May 14 23:49:33.386857 waagent[1879]: 2025-05-14T23:49:33.386796Z INFO Daemon Daemon Route to 168.63.129.16 exists May 14 23:49:33.392272 waagent[1879]: 2025-05-14T23:49:33.392213Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 14 23:49:33.440656 waagent[1879]: 2025-05-14T23:49:33.440608Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 14 23:49:33.447567 waagent[1879]: 2025-05-14T23:49:33.447535Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 14 23:49:33.453240 waagent[1879]: 2025-05-14T23:49:33.453181Z INFO Daemon Daemon Server preferred version:2015-04-05 May 14 23:49:33.640643 waagent[1879]: 2025-05-14T23:49:33.640472Z INFO Daemon Daemon Initializing goal state during protocol detection May 14 23:49:33.647601 waagent[1879]: 2025-05-14T23:49:33.647528Z INFO Daemon Daemon Forcing an update of the goal state. 
May 14 23:49:33.656790 waagent[1879]: 2025-05-14T23:49:33.656733Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 14 23:49:33.678660 waagent[1879]: 2025-05-14T23:49:33.678615Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 14 23:49:33.684666 waagent[1879]: 2025-05-14T23:49:33.684613Z INFO Daemon May 14 23:49:33.687612 waagent[1879]: 2025-05-14T23:49:33.687558Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: bd8d48b8-1a28-4162-9f3a-66d59dae7f56 eTag: 737423395944469520 source: Fabric] May 14 23:49:33.699495 waagent[1879]: 2025-05-14T23:49:33.699445Z INFO Daemon The vmSettings originated via Fabric; will ignore them. May 14 23:49:33.706606 waagent[1879]: 2025-05-14T23:49:33.706555Z INFO Daemon May 14 23:49:33.709655 waagent[1879]: 2025-05-14T23:49:33.709602Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 14 23:49:33.720791 waagent[1879]: 2025-05-14T23:49:33.720754Z INFO Daemon Daemon Downloading artifacts profile blob May 14 23:49:33.886730 waagent[1879]: 2025-05-14T23:49:33.885871Z INFO Daemon Downloaded certificate {'thumbprint': '54FFBABEF1B167AE5F54DEB3AC22D0D843B459B7', 'hasPrivateKey': False} May 14 23:49:33.896397 waagent[1879]: 2025-05-14T23:49:33.896305Z INFO Daemon Downloaded certificate {'thumbprint': '110D270CCAA77B9618F93F5B3E69A1BEE8BE81FD', 'hasPrivateKey': True} May 14 23:49:33.906616 waagent[1879]: 2025-05-14T23:49:33.906559Z INFO Daemon Fetch goal state completed May 14 23:49:33.950680 waagent[1879]: 2025-05-14T23:49:33.950610Z INFO Daemon Daemon Starting provisioning May 14 23:49:33.955800 waagent[1879]: 2025-05-14T23:49:33.955733Z INFO Daemon Daemon Handle ovf-env.xml. 
May 14 23:49:33.960522 waagent[1879]: 2025-05-14T23:49:33.960463Z INFO Daemon Daemon Set hostname [ci-4230.1.1-n-00beb67e77] May 14 23:49:34.001714 waagent[1879]: 2025-05-14T23:49:33.999324Z INFO Daemon Daemon Publish hostname [ci-4230.1.1-n-00beb67e77] May 14 23:49:34.006248 waagent[1879]: 2025-05-14T23:49:34.006176Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 14 23:49:34.013378 waagent[1879]: 2025-05-14T23:49:34.013296Z INFO Daemon Daemon Primary interface is [eth0] May 14 23:49:34.028033 systemd-networkd[1343]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:49:34.028042 systemd-networkd[1343]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 23:49:34.028074 systemd-networkd[1343]: eth0: DHCP lease lost May 14 23:49:34.029924 waagent[1879]: 2025-05-14T23:49:34.029833Z INFO Daemon Daemon Create user account if not exists May 14 23:49:34.035716 waagent[1879]: 2025-05-14T23:49:34.035636Z INFO Daemon Daemon User core already exists, skip useradd May 14 23:49:34.041530 waagent[1879]: 2025-05-14T23:49:34.041448Z INFO Daemon Daemon Configure sudoer May 14 23:49:34.046236 waagent[1879]: 2025-05-14T23:49:34.046167Z INFO Daemon Daemon Configure sshd May 14 23:49:34.050908 waagent[1879]: 2025-05-14T23:49:34.050841Z INFO Daemon Daemon Disabled SSH password-based authentication methods. Configured SSH client probing to keep connections alive. May 14 23:49:34.063983 waagent[1879]: 2025-05-14T23:49:34.063913Z INFO Daemon Daemon Deploy ssh public key. 
May 14 23:49:34.085779 systemd-networkd[1343]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 14 23:49:35.257718 waagent[1879]: 2025-05-14T23:49:35.257658Z INFO Daemon Daemon Provisioning complete May 14 23:49:35.276529 waagent[1879]: 2025-05-14T23:49:35.276484Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 14 23:49:35.282962 waagent[1879]: 2025-05-14T23:49:35.282895Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. May 14 23:49:35.293550 waagent[1879]: 2025-05-14T23:49:35.293483Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent May 14 23:49:35.438761 waagent[1953]: 2025-05-14T23:49:35.438630Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) May 14 23:49:35.439178 waagent[1953]: 2025-05-14T23:49:35.438840Z INFO ExtHandler ExtHandler OS: flatcar 4230.1.1 May 14 23:49:35.439178 waagent[1953]: 2025-05-14T23:49:35.438906Z INFO ExtHandler ExtHandler Python: 3.11.11 May 14 23:49:35.548234 waagent[1953]: 2025-05-14T23:49:35.548074Z INFO ExtHandler ExtHandler Distro: flatcar-4230.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 14 23:49:35.548375 waagent[1953]: 2025-05-14T23:49:35.548333Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 14 23:49:35.548445 waagent[1953]: 2025-05-14T23:49:35.548413Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 14 23:49:35.557173 waagent[1953]: 2025-05-14T23:49:35.557092Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 14 23:49:35.563447 waagent[1953]: 2025-05-14T23:49:35.563397Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 14 23:49:35.564111 waagent[1953]: 2025-05-14T23:49:35.564066Z INFO ExtHandler May 14 23:49:35.564196 waagent[1953]: 
2025-05-14T23:49:35.564164Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 3009ad7d-ad42-4dc8-9efc-be0af056fafa eTag: 737423395944469520 source: Fabric] May 14 23:49:35.564506 waagent[1953]: 2025-05-14T23:49:35.564466Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. May 14 23:49:35.565130 waagent[1953]: 2025-05-14T23:49:35.565079Z INFO ExtHandler May 14 23:49:35.565202 waagent[1953]: 2025-05-14T23:49:35.565171Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 14 23:49:35.569523 waagent[1953]: 2025-05-14T23:49:35.569481Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 14 23:49:35.663845 waagent[1953]: 2025-05-14T23:49:35.663747Z INFO ExtHandler Downloaded certificate {'thumbprint': '54FFBABEF1B167AE5F54DEB3AC22D0D843B459B7', 'hasPrivateKey': False} May 14 23:49:35.664307 waagent[1953]: 2025-05-14T23:49:35.664261Z INFO ExtHandler Downloaded certificate {'thumbprint': '110D270CCAA77B9618F93F5B3E69A1BEE8BE81FD', 'hasPrivateKey': True} May 14 23:49:35.664760 waagent[1953]: 2025-05-14T23:49:35.664684Z INFO ExtHandler Fetch goal state completed May 14 23:49:35.681847 waagent[1953]: 2025-05-14T23:49:35.681778Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1953 May 14 23:49:35.682021 waagent[1953]: 2025-05-14T23:49:35.681985Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 14 23:49:35.683792 waagent[1953]: 2025-05-14T23:49:35.683736Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.1.1', '', 'Flatcar Container Linux by Kinvolk'] May 14 23:49:35.684197 waagent[1953]: 2025-05-14T23:49:35.684159Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 14 23:49:35.897448 waagent[1953]: 2025-05-14T23:49:35.897398Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 14 
23:49:35.897663 waagent[1953]: 2025-05-14T23:49:35.897620Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 14 23:49:35.904723 waagent[1953]: 2025-05-14T23:49:35.904362Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 14 23:49:35.911614 systemd[1]: Reload requested from client PID 1968 ('systemctl') (unit waagent.service)... May 14 23:49:35.911629 systemd[1]: Reloading... May 14 23:49:35.991736 zram_generator::config[2004]: No configuration found. May 14 23:49:36.122379 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:49:36.226308 systemd[1]: Reloading finished in 314 ms. May 14 23:49:36.246727 waagent[1953]: 2025-05-14T23:49:36.241129Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service May 14 23:49:36.247609 systemd[1]: Reload requested from client PID 2061 ('systemctl') (unit waagent.service)... May 14 23:49:36.247752 systemd[1]: Reloading... May 14 23:49:36.345728 zram_generator::config[2103]: No configuration found. May 14 23:49:36.453317 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:49:36.557969 systemd[1]: Reloading finished in 309 ms. 
May 14 23:49:36.574118 waagent[1953]: 2025-05-14T23:49:36.571133Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 14 23:49:36.574118 waagent[1953]: 2025-05-14T23:49:36.571330Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 14 23:49:36.965773 waagent[1953]: 2025-05-14T23:49:36.964972Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. May 14 23:49:36.965773 waagent[1953]: 2025-05-14T23:49:36.965604Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] May 14 23:49:36.966565 waagent[1953]: 2025-05-14T23:49:36.966474Z INFO ExtHandler ExtHandler Starting env monitor service. May 14 23:49:36.967073 waagent[1953]: 2025-05-14T23:49:36.967020Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 14 23:49:36.967207 waagent[1953]: 2025-05-14T23:49:36.967072Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 14 23:49:36.967498 waagent[1953]: 2025-05-14T23:49:36.967423Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 14 23:49:36.967679 waagent[1953]: 2025-05-14T23:49:36.967624Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 14 23:49:36.968113 waagent[1953]: 2025-05-14T23:49:36.967986Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 14 23:49:36.968320 waagent[1953]: 2025-05-14T23:49:36.968217Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 14 23:49:36.968431 waagent[1953]: 2025-05-14T23:49:36.968287Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 14 23:49:36.968628 waagent[1953]: 2025-05-14T23:49:36.968553Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This setting controls how often the agent checks for new goal states and reports status. May 14 23:49:36.969119 waagent[1953]: 2025-05-14T23:49:36.969062Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 14 23:49:36.969232 waagent[1953]: 2025-05-14T23:49:36.969146Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 14 23:49:36.970068 waagent[1953]: 2025-05-14T23:49:36.969978Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 14 23:49:36.970232 waagent[1953]: 2025-05-14T23:49:36.970181Z INFO EnvHandler ExtHandler Configure routes May 14 23:49:36.970763 waagent[1953]: 2025-05-14T23:49:36.970617Z INFO EnvHandler ExtHandler Gateway:None May 14 23:49:36.971002 waagent[1953]: 2025-05-14T23:49:36.970949Z INFO EnvHandler ExtHandler Routes:None May 14 23:49:36.972180 waagent[1953]: 2025-05-14T23:49:36.972108Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 14 23:49:36.972180 waagent[1953]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 14 23:49:36.972180 waagent[1953]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 14 23:49:36.972180 waagent[1953]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 14 23:49:36.972180 waagent[1953]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 14 23:49:36.972180 waagent[1953]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 14 23:49:36.972180 waagent[1953]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 14 23:49:36.979634 waagent[1953]: 2025-05-14T23:49:36.978994Z INFO ExtHandler ExtHandler May 14 23:49:36.979634 waagent[1953]: 2025-05-14T23:49:36.979115Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 5378c24f-d117-46da-a917-d2292f3f08dc correlation c0bb6c88-f75d-4412-bd48-56a496e483a7 created: 2025-05-14T23:48:19.021889Z] May 14 23:49:36.979634 waagent[1953]: 
2025-05-14T23:49:36.979502Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 14 23:49:36.980389 waagent[1953]: 2025-05-14T23:49:36.980336Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] May 14 23:49:37.023637 waagent[1953]: 2025-05-14T23:49:37.023581Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F4A0A96E-A966-4173-9D42-682F24F0BDE0;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] May 14 23:49:37.065109 waagent[1953]: 2025-05-14T23:49:37.065024Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: May 14 23:49:37.065109 waagent[1953]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 14 23:49:37.065109 waagent[1953]: pkts bytes target prot opt in out source destination May 14 23:49:37.065109 waagent[1953]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 14 23:49:37.065109 waagent[1953]: pkts bytes target prot opt in out source destination May 14 23:49:37.065109 waagent[1953]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 14 23:49:37.065109 waagent[1953]: pkts bytes target prot opt in out source destination May 14 23:49:37.065109 waagent[1953]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 14 23:49:37.065109 waagent[1953]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 14 23:49:37.065109 waagent[1953]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 14 23:49:37.068070 waagent[1953]: 2025-05-14T23:49:37.067974Z INFO MonitorHandler ExtHandler Network interfaces: May 14 23:49:37.068070 waagent[1953]: Executing ['ip', '-a', '-o', 'link']: May 14 23:49:37.068070 waagent[1953]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 14 23:49:37.068070 waagent[1953]: 2: eth0: mtu 1500 qdisc mq state UP mode 
DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6f:2d:b8 brd ff:ff:ff:ff:ff:ff May 14 23:49:37.068070 waagent[1953]: 3: enP16889s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6f:2d:b8 brd ff:ff:ff:ff:ff:ff\ altname enP16889p0s2 May 14 23:49:37.068070 waagent[1953]: Executing ['ip', '-4', '-a', '-o', 'address']: May 14 23:49:37.068070 waagent[1953]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 14 23:49:37.068070 waagent[1953]: 2: eth0 inet 10.200.20.38/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 14 23:49:37.068070 waagent[1953]: Executing ['ip', '-6', '-a', '-o', 'address']: May 14 23:49:37.068070 waagent[1953]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever May 14 23:49:37.068070 waagent[1953]: 2: eth0 inet6 fe80::20d:3aff:fe6f:2db8/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 14 23:49:37.068070 waagent[1953]: 3: enP16889s1 inet6 fe80::20d:3aff:fe6f:2db8/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 14 23:49:37.107424 waagent[1953]: 2025-05-14T23:49:37.107321Z INFO EnvHandler ExtHandler Current Firewall rules: May 14 23:49:37.107424 waagent[1953]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 14 23:49:37.107424 waagent[1953]: pkts bytes target prot opt in out source destination May 14 23:49:37.107424 waagent[1953]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 14 23:49:37.107424 waagent[1953]: pkts bytes target prot opt in out source destination May 14 23:49:37.107424 waagent[1953]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 14 23:49:37.107424 waagent[1953]: pkts bytes target prot opt in out source destination May 14 23:49:37.107424 waagent[1953]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 14 23:49:37.107424 waagent[1953]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID 
match 0 May 14 23:49:37.107424 waagent[1953]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 14 23:49:37.107740 waagent[1953]: 2025-05-14T23:49:37.107681Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 14 23:49:41.805949 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 23:49:41.813866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:49:41.906639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:49:41.910207 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:49:42.017710 kubelet[2195]: E0514 23:49:42.017643 2195 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:49:42.020505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:49:42.020641 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:49:42.021112 systemd[1]: kubelet.service: Consumed 118ms CPU time, 98.6M memory peak. May 14 23:49:52.056148 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 23:49:52.065903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:49:52.176885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 23:49:52.188108 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:49:52.244977 kubelet[2211]: E0514 23:49:52.244927 2211 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:49:52.248109 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:49:52.248309 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:49:52.248990 systemd[1]: kubelet.service: Consumed 132ms CPU time, 94.6M memory peak. May 14 23:49:53.658939 chronyd[1707]: Selected source PHC0 May 14 23:50:02.306063 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 14 23:50:02.310900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:50:02.551089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:50:02.556205 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:50:02.595424 kubelet[2226]: E0514 23:50:02.595335 2226 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:50:02.598357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:50:02.598513 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 14 23:50:02.599098 systemd[1]: kubelet.service: Consumed 131ms CPU time, 96.4M memory peak. May 14 23:50:05.884064 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 23:50:05.890036 systemd[1]: Started sshd@0-10.200.20.38:22-10.200.16.10:42346.service - OpenSSH per-connection server daemon (10.200.16.10:42346). May 14 23:50:06.529531 sshd[2234]: Accepted publickey for core from 10.200.16.10 port 42346 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:50:06.530962 sshd-session[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:50:06.535387 systemd-logind[1717]: New session 3 of user core. May 14 23:50:06.542869 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 23:50:06.959941 systemd[1]: Started sshd@1-10.200.20.38:22-10.200.16.10:42356.service - OpenSSH per-connection server daemon (10.200.16.10:42356). May 14 23:50:07.440223 sshd[2239]: Accepted publickey for core from 10.200.16.10 port 42356 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:50:07.441883 sshd-session[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:50:07.446679 systemd-logind[1717]: New session 4 of user core. May 14 23:50:07.454927 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 23:50:07.785875 sshd[2241]: Connection closed by 10.200.16.10 port 42356 May 14 23:50:07.786635 sshd-session[2239]: pam_unix(sshd:session): session closed for user core May 14 23:50:07.790366 systemd[1]: sshd@1-10.200.20.38:22-10.200.16.10:42356.service: Deactivated successfully. May 14 23:50:07.793203 systemd[1]: session-4.scope: Deactivated successfully. May 14 23:50:07.794279 systemd-logind[1717]: Session 4 logged out. Waiting for processes to exit. May 14 23:50:07.795230 systemd-logind[1717]: Removed session 4. 
May 14 23:50:07.872267 systemd[1]: Started sshd@2-10.200.20.38:22-10.200.16.10:42370.service - OpenSSH per-connection server daemon (10.200.16.10:42370). May 14 23:50:08.353602 sshd[2247]: Accepted publickey for core from 10.200.16.10 port 42370 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:50:08.355036 sshd-session[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:50:08.359321 systemd-logind[1717]: New session 5 of user core. May 14 23:50:08.370852 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 23:50:08.694385 sshd[2249]: Connection closed by 10.200.16.10 port 42370 May 14 23:50:08.694957 sshd-session[2247]: pam_unix(sshd:session): session closed for user core May 14 23:50:08.698766 systemd[1]: sshd@2-10.200.20.38:22-10.200.16.10:42370.service: Deactivated successfully. May 14 23:50:08.700436 systemd[1]: session-5.scope: Deactivated successfully. May 14 23:50:08.702221 systemd-logind[1717]: Session 5 logged out. Waiting for processes to exit. May 14 23:50:08.703163 systemd-logind[1717]: Removed session 5. May 14 23:50:08.790920 systemd[1]: Started sshd@3-10.200.20.38:22-10.200.16.10:45786.service - OpenSSH per-connection server daemon (10.200.16.10:45786). May 14 23:50:09.268489 sshd[2255]: Accepted publickey for core from 10.200.16.10 port 45786 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:50:09.269864 sshd-session[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:50:09.274103 systemd-logind[1717]: New session 6 of user core. May 14 23:50:09.281885 systemd[1]: Started session-6.scope - Session 6 of User core. May 14 23:50:09.613225 sshd[2257]: Connection closed by 10.200.16.10 port 45786 May 14 23:50:09.613827 sshd-session[2255]: pam_unix(sshd:session): session closed for user core May 14 23:50:09.616932 systemd[1]: sshd@3-10.200.20.38:22-10.200.16.10:45786.service: Deactivated successfully. 
May 14 23:50:09.619032 systemd[1]: session-6.scope: Deactivated successfully. May 14 23:50:09.620726 systemd-logind[1717]: Session 6 logged out. Waiting for processes to exit. May 14 23:50:09.621638 systemd-logind[1717]: Removed session 6. May 14 23:50:09.702977 systemd[1]: Started sshd@4-10.200.20.38:22-10.200.16.10:45792.service - OpenSSH per-connection server daemon (10.200.16.10:45792). May 14 23:50:10.150919 sshd[2263]: Accepted publickey for core from 10.200.16.10 port 45792 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:50:10.152219 sshd-session[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:50:10.156354 systemd-logind[1717]: New session 7 of user core. May 14 23:50:10.165849 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 23:50:10.521426 sudo[2266]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 23:50:10.521743 sudo[2266]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:50:10.534753 sudo[2266]: pam_unix(sudo:session): session closed for user root May 14 23:50:10.607013 sshd[2265]: Connection closed by 10.200.16.10 port 45792 May 14 23:50:10.606001 sshd-session[2263]: pam_unix(sshd:session): session closed for user core May 14 23:50:10.610337 systemd[1]: sshd@4-10.200.20.38:22-10.200.16.10:45792.service: Deactivated successfully. May 14 23:50:10.613432 systemd[1]: session-7.scope: Deactivated successfully. May 14 23:50:10.617256 systemd-logind[1717]: Session 7 logged out. Waiting for processes to exit. May 14 23:50:10.618737 systemd-logind[1717]: Removed session 7. May 14 23:50:10.701182 systemd[1]: Started sshd@5-10.200.20.38:22-10.200.16.10:45798.service - OpenSSH per-connection server daemon (10.200.16.10:45798). 
May 14 23:50:11.181619 sshd[2272]: Accepted publickey for core from 10.200.16.10 port 45798 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:50:11.183094 sshd-session[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:50:11.187673 systemd-logind[1717]: New session 8 of user core. May 14 23:50:11.193850 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 23:50:11.451994 sudo[2276]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 23:50:11.453021 sudo[2276]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:50:11.456389 sudo[2276]: pam_unix(sudo:session): session closed for user root May 14 23:50:11.461068 sudo[2275]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 23:50:11.461325 sudo[2275]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:50:11.472034 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 23:50:11.499450 augenrules[2298]: No rules May 14 23:50:11.501144 systemd[1]: audit-rules.service: Deactivated successfully. May 14 23:50:11.501439 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 23:50:11.503021 sudo[2275]: pam_unix(sudo:session): session closed for user root May 14 23:50:11.578608 sshd[2274]: Connection closed by 10.200.16.10 port 45798 May 14 23:50:11.579259 sshd-session[2272]: pam_unix(sshd:session): session closed for user core May 14 23:50:11.582749 systemd-logind[1717]: Session 8 logged out. Waiting for processes to exit. May 14 23:50:11.582992 systemd[1]: sshd@5-10.200.20.38:22-10.200.16.10:45798.service: Deactivated successfully. May 14 23:50:11.584997 systemd[1]: session-8.scope: Deactivated successfully. May 14 23:50:11.586998 systemd-logind[1717]: Removed session 8. 
May 14 23:50:11.674964 systemd[1]: Started sshd@6-10.200.20.38:22-10.200.16.10:45802.service - OpenSSH per-connection server daemon (10.200.16.10:45802). May 14 23:50:12.156429 sshd[2307]: Accepted publickey for core from 10.200.16.10 port 45802 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:50:12.157830 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:50:12.162166 systemd-logind[1717]: New session 9 of user core. May 14 23:50:12.172886 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 23:50:12.428627 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 23:50:12.429041 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:50:12.807032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 14 23:50:12.815747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:50:13.032012 kernel: hv_balloon: Max. dynamic memory size: 4096 MB May 14 23:50:13.159913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:50:13.164454 (kubelet)[2330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:50:13.205704 kubelet[2330]: E0514 23:50:13.205640 2330 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:50:13.208483 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:50:13.208905 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 14 23:50:13.209498 systemd[1]: kubelet.service: Consumed 134ms CPU time, 92.6M memory peak. May 14 23:50:14.344147 (dockerd)[2342]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 23:50:14.344343 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 23:50:15.353619 dockerd[2342]: time="2025-05-14T23:50:15.353554174Z" level=info msg="Starting up" May 14 23:50:15.431991 update_engine[1726]: I20250514 23:50:15.431917 1726 update_attempter.cc:509] Updating boot flags... May 14 23:50:15.519740 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2366) May 14 23:50:15.656727 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2369) May 14 23:50:15.684212 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1310074084-merged.mount: Deactivated successfully. May 14 23:50:15.766312 dockerd[2342]: time="2025-05-14T23:50:15.766266875Z" level=info msg="Loading containers: start." May 14 23:50:15.980740 kernel: Initializing XFRM netlink socket May 14 23:50:16.122139 systemd-networkd[1343]: docker0: Link UP May 14 23:50:16.148283 dockerd[2342]: time="2025-05-14T23:50:16.147754654Z" level=info msg="Loading containers: done." 
May 14 23:50:16.164834 dockerd[2342]: time="2025-05-14T23:50:16.164790314Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 23:50:16.164981 dockerd[2342]: time="2025-05-14T23:50:16.164892714Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 14 23:50:16.165030 dockerd[2342]: time="2025-05-14T23:50:16.165007593Z" level=info msg="Daemon has completed initialization" May 14 23:50:16.212836 dockerd[2342]: time="2025-05-14T23:50:16.212743616Z" level=info msg="API listen on /run/docker.sock" May 14 23:50:16.212981 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 23:50:17.234449 containerd[1746]: time="2025-05-14T23:50:17.234403781Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 14 23:50:18.155343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2009499348.mount: Deactivated successfully. 
May 14 23:50:19.242739 containerd[1746]: time="2025-05-14T23:50:19.242325976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:19.245860 containerd[1746]: time="2025-05-14T23:50:19.245607331Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554608" May 14 23:50:19.248508 containerd[1746]: time="2025-05-14T23:50:19.248463527Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:19.252858 containerd[1746]: time="2025-05-14T23:50:19.252815481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:19.254028 containerd[1746]: time="2025-05-14T23:50:19.253846119Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 2.019397938s" May 14 23:50:19.254028 containerd[1746]: time="2025-05-14T23:50:19.253885839Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 14 23:50:19.254864 containerd[1746]: time="2025-05-14T23:50:19.254650558Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 14 23:50:20.429663 containerd[1746]: time="2025-05-14T23:50:20.429538883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:20.433159 containerd[1746]: time="2025-05-14T23:50:20.432944038Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458978" May 14 23:50:20.435803 containerd[1746]: time="2025-05-14T23:50:20.435768474Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:20.442406 containerd[1746]: time="2025-05-14T23:50:20.442368905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:20.443609 containerd[1746]: time="2025-05-14T23:50:20.443495263Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.188812985s" May 14 23:50:20.443609 containerd[1746]: time="2025-05-14T23:50:20.443525103Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 14 23:50:20.444187 containerd[1746]: time="2025-05-14T23:50:20.444020703Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 14 23:50:21.522782 containerd[1746]: time="2025-05-14T23:50:21.522722805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:21.527484 containerd[1746]: time="2025-05-14T23:50:21.527434598Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125813" May 14 23:50:21.532142 containerd[1746]: time="2025-05-14T23:50:21.532098472Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:21.538236 containerd[1746]: time="2025-05-14T23:50:21.538182383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:21.539368 containerd[1746]: time="2025-05-14T23:50:21.539211861Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.095161358s" May 14 23:50:21.539368 containerd[1746]: time="2025-05-14T23:50:21.539242981Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 14 23:50:21.539689 containerd[1746]: time="2025-05-14T23:50:21.539645221Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 14 23:50:22.581107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2521039511.mount: Deactivated successfully. 
May 14 23:50:22.921659 containerd[1746]: time="2025-05-14T23:50:22.921528891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:22.924467 containerd[1746]: time="2025-05-14T23:50:22.924426047Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871917" May 14 23:50:22.928967 containerd[1746]: time="2025-05-14T23:50:22.928930160Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:22.933948 containerd[1746]: time="2025-05-14T23:50:22.933901673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:22.934576 containerd[1746]: time="2025-05-14T23:50:22.934447592Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.394769851s" May 14 23:50:22.934576 containerd[1746]: time="2025-05-14T23:50:22.934480152Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 14 23:50:22.935065 containerd[1746]: time="2025-05-14T23:50:22.935033312Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 23:50:23.305907 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 14 23:50:23.315857 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 14 23:50:23.409454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:50:23.417951 (kubelet)[2716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:50:23.451024 kubelet[2716]: E0514 23:50:23.450953 2716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:50:23.453804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:50:23.454054 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:50:23.454574 systemd[1]: kubelet.service: Consumed 111ms CPU time, 96.4M memory peak. May 14 23:50:23.874027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819522483.mount: Deactivated successfully. 
May 14 23:50:25.351158 containerd[1746]: time="2025-05-14T23:50:25.351003027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:25.356109 containerd[1746]: time="2025-05-14T23:50:25.356048660Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" May 14 23:50:25.359820 containerd[1746]: time="2025-05-14T23:50:25.359754175Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:25.365660 containerd[1746]: time="2025-05-14T23:50:25.365622287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:25.366803 containerd[1746]: time="2025-05-14T23:50:25.366768165Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.431700733s" May 14 23:50:25.366803 containerd[1746]: time="2025-05-14T23:50:25.366803765Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 14 23:50:25.367418 containerd[1746]: time="2025-05-14T23:50:25.367313124Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 23:50:25.958563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1216388615.mount: Deactivated successfully. 
May 14 23:50:25.984750 containerd[1746]: time="2025-05-14T23:50:25.984219771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:25.986786 containerd[1746]: time="2025-05-14T23:50:25.986723208Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 14 23:50:25.992544 containerd[1746]: time="2025-05-14T23:50:25.992495441Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:25.997845 containerd[1746]: time="2025-05-14T23:50:25.997755714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:25.999099 containerd[1746]: time="2025-05-14T23:50:25.998424833Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 631.076189ms" May 14 23:50:25.999099 containerd[1746]: time="2025-05-14T23:50:25.998459633Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 14 23:50:25.999099 containerd[1746]: time="2025-05-14T23:50:25.998899553Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 14 23:50:26.648246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1454305938.mount: Deactivated successfully. 
May 14 23:50:29.501394 containerd[1746]: time="2025-05-14T23:50:29.501348060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:29.503661 containerd[1746]: time="2025-05-14T23:50:29.503624857Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" May 14 23:50:29.506318 containerd[1746]: time="2025-05-14T23:50:29.506275814Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:29.510801 containerd[1746]: time="2025-05-14T23:50:29.510747448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:29.512262 containerd[1746]: time="2025-05-14T23:50:29.512130727Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.513208534s" May 14 23:50:29.512262 containerd[1746]: time="2025-05-14T23:50:29.512164807Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 14 23:50:33.556365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 14 23:50:33.565944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:50:33.807057 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 23:50:33.811039 (kubelet)[2852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:50:33.850839 kubelet[2852]: E0514 23:50:33.850800 2852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:50:33.853533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:50:33.853940 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:50:33.854518 systemd[1]: kubelet.service: Consumed 115ms CPU time, 92.5M memory peak. May 14 23:50:35.470208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:50:35.470363 systemd[1]: kubelet.service: Consumed 115ms CPU time, 92.5M memory peak. May 14 23:50:35.479022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:50:35.502589 systemd[1]: Reload requested from client PID 2867 ('systemctl') (unit session-9.scope)... May 14 23:50:35.502608 systemd[1]: Reloading... May 14 23:50:35.621733 zram_generator::config[2910]: No configuration found. May 14 23:50:35.727052 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:50:35.829775 systemd[1]: Reloading finished in 326 ms. May 14 23:50:35.868454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 23:50:35.871646 (kubelet)[2972]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:50:35.875757 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:50:35.876284 systemd[1]: kubelet.service: Deactivated successfully. May 14 23:50:35.876783 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:50:35.876820 systemd[1]: kubelet.service: Consumed 83ms CPU time, 83.8M memory peak. May 14 23:50:35.880941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:50:35.975968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:50:35.979321 (kubelet)[2988]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:50:36.012724 kubelet[2988]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:50:36.012724 kubelet[2988]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 23:50:36.012724 kubelet[2988]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 23:50:36.012724 kubelet[2988]: I0514 23:50:36.011753 2988 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:50:36.974312 kubelet[2988]: I0514 23:50:36.974273 2988 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 23:50:36.974312 kubelet[2988]: I0514 23:50:36.974303 2988 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:50:36.974561 kubelet[2988]: I0514 23:50:36.974540 2988 server.go:929] "Client rotation is on, will bootstrap in background" May 14 23:50:36.999834 kubelet[2988]: E0514 23:50:36.999772 2988 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:37.000477 kubelet[2988]: I0514 23:50:37.000362 2988 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:50:37.006334 kubelet[2988]: E0514 23:50:37.006292 2988 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 23:50:37.006334 kubelet[2988]: I0514 23:50:37.006332 2988 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 23:50:37.010056 kubelet[2988]: I0514 23:50:37.010035 2988 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 23:50:37.010667 kubelet[2988]: I0514 23:50:37.010647 2988 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 23:50:37.010819 kubelet[2988]: I0514 23:50:37.010790 2988 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:50:37.010981 kubelet[2988]: I0514 23:50:37.010820 2988 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-00beb67e77","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} May 14 23:50:37.011065 kubelet[2988]: I0514 23:50:37.010990 2988 topology_manager.go:138] "Creating topology manager with none policy" May 14 23:50:37.011065 kubelet[2988]: I0514 23:50:37.010999 2988 container_manager_linux.go:300] "Creating device plugin manager" May 14 23:50:37.011131 kubelet[2988]: I0514 23:50:37.011112 2988 state_mem.go:36] "Initialized new in-memory state store" May 14 23:50:37.012957 kubelet[2988]: I0514 23:50:37.012716 2988 kubelet.go:408] "Attempting to sync node with API server" May 14 23:50:37.012957 kubelet[2988]: I0514 23:50:37.012745 2988 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:50:37.012957 kubelet[2988]: I0514 23:50:37.012773 2988 kubelet.go:314] "Adding apiserver pod source" May 14 23:50:37.012957 kubelet[2988]: I0514 23:50:37.012783 2988 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:50:37.016090 kubelet[2988]: W0514 23:50:37.015770 2988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-00beb67e77&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 14 23:50:37.016090 kubelet[2988]: E0514 23:50:37.015824 2988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-00beb67e77&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:37.016784 kubelet[2988]: W0514 23:50:37.016747 2988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: 
connection refused May 14 23:50:37.016888 kubelet[2988]: E0514 23:50:37.016873 2988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:37.017028 kubelet[2988]: I0514 23:50:37.017014 2988 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 23:50:37.018747 kubelet[2988]: I0514 23:50:37.018629 2988 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:50:37.019746 kubelet[2988]: W0514 23:50:37.019123 2988 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 23:50:37.020016 kubelet[2988]: I0514 23:50:37.020002 2988 server.go:1269] "Started kubelet" May 14 23:50:37.022747 kubelet[2988]: I0514 23:50:37.022709 2988 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:50:37.023602 kubelet[2988]: I0514 23:50:37.023546 2988 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:50:37.023993 kubelet[2988]: I0514 23:50:37.023972 2988 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:50:37.025515 kubelet[2988]: I0514 23:50:37.025493 2988 server.go:460] "Adding debug handlers to kubelet server" May 14 23:50:37.026624 kubelet[2988]: I0514 23:50:37.026593 2988 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:50:37.028000 kubelet[2988]: I0514 23:50:37.027973 2988 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 23:50:37.028325 kubelet[2988]: E0514 23:50:37.027241 2988 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-n-00beb67e77.183f89bfbb7780e8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-n-00beb67e77,UID:ci-4230.1.1-n-00beb67e77,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-n-00beb67e77,},FirstTimestamp:2025-05-14 23:50:37.019971816 +0000 UTC m=+1.037478973,LastTimestamp:2025-05-14 23:50:37.019971816 +0000 UTC m=+1.037478973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-n-00beb67e77,}" May 14 23:50:37.029402 kubelet[2988]: I0514 23:50:37.029376 2988 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 23:50:37.030246 kubelet[2988]: E0514 23:50:37.030224 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:37.031317 kubelet[2988]: E0514 23:50:37.031270 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-00beb67e77?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="200ms" May 14 23:50:37.031930 kubelet[2988]: I0514 23:50:37.031907 2988 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 23:50:37.032340 kubelet[2988]: I0514 23:50:37.032055 2988 reconciler.go:26] "Reconciler: start to sync state" May 14 23:50:37.032430 
kubelet[2988]: W0514 23:50:37.032396 2988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 14 23:50:37.032460 kubelet[2988]: E0514 23:50:37.032445 2988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:37.033969 kubelet[2988]: I0514 23:50:37.033768 2988 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:50:37.036797 kubelet[2988]: E0514 23:50:37.036769 2988 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 23:50:37.037153 kubelet[2988]: I0514 23:50:37.037129 2988 factory.go:221] Registration of the containerd container factory successfully May 14 23:50:37.037153 kubelet[2988]: I0514 23:50:37.037148 2988 factory.go:221] Registration of the systemd container factory successfully May 14 23:50:37.051139 kubelet[2988]: I0514 23:50:37.051106 2988 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 23:50:37.051335 kubelet[2988]: I0514 23:50:37.051264 2988 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 23:50:37.051335 kubelet[2988]: I0514 23:50:37.051285 2988 state_mem.go:36] "Initialized new in-memory state store" May 14 23:50:37.057103 kubelet[2988]: I0514 23:50:37.057020 2988 policy_none.go:49] "None policy: Start" May 14 23:50:37.057964 kubelet[2988]: I0514 23:50:37.057647 2988 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 23:50:37.057964 kubelet[2988]: I0514 23:50:37.057673 2988 state_mem.go:35] "Initializing new in-memory state store" May 14 23:50:37.069458 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 23:50:37.073738 kubelet[2988]: I0514 23:50:37.073599 2988 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:50:37.076102 kubelet[2988]: I0514 23:50:37.076071 2988 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 23:50:37.076102 kubelet[2988]: I0514 23:50:37.076095 2988 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 23:50:37.076210 kubelet[2988]: I0514 23:50:37.076120 2988 kubelet.go:2321] "Starting kubelet main sync loop" May 14 23:50:37.076210 kubelet[2988]: E0514 23:50:37.076158 2988 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:50:37.078629 kubelet[2988]: W0514 23:50:37.078348 2988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 14 23:50:37.078629 kubelet[2988]: E0514 23:50:37.078394 2988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:37.081568 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 23:50:37.085849 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 14 23:50:37.097513 kubelet[2988]: I0514 23:50:37.097486 2988 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:50:37.097713 kubelet[2988]: I0514 23:50:37.097680 2988 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 23:50:37.097760 kubelet[2988]: I0514 23:50:37.097715 2988 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:50:37.098391 kubelet[2988]: I0514 23:50:37.098364 2988 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:50:37.099958 kubelet[2988]: E0514 23:50:37.099913 2988 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:37.186655 systemd[1]: Created slice kubepods-burstable-pod02d3ecdbd73a09fdd2876e7eb086efcd.slice - libcontainer container kubepods-burstable-pod02d3ecdbd73a09fdd2876e7eb086efcd.slice. May 14 23:50:37.201040 kubelet[2988]: I0514 23:50:37.200899 2988 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-00beb67e77" May 14 23:50:37.201244 kubelet[2988]: E0514 23:50:37.201203 2988 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4230.1.1-n-00beb67e77" May 14 23:50:37.204401 systemd[1]: Created slice kubepods-burstable-podfa9ec0e0b055e65ff68deb6fb3f4a288.slice - libcontainer container kubepods-burstable-podfa9ec0e0b055e65ff68deb6fb3f4a288.slice. May 14 23:50:37.215994 systemd[1]: Created slice kubepods-burstable-podccfdb9987a6789fa2ac2564ecaa6a5f6.slice - libcontainer container kubepods-burstable-podccfdb9987a6789fa2ac2564ecaa6a5f6.slice. 
May 14 23:50:37.231958 kubelet[2988]: E0514 23:50:37.231871 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-00beb67e77?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="400ms" May 14 23:50:37.326631 kubelet[2988]: E0514 23:50:37.326489 2988 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-n-00beb67e77.183f89bfbb7780e8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-n-00beb67e77,UID:ci-4230.1.1-n-00beb67e77,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-n-00beb67e77,},FirstTimestamp:2025-05-14 23:50:37.019971816 +0000 UTC m=+1.037478973,LastTimestamp:2025-05-14 23:50:37.019971816 +0000 UTC m=+1.037478973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-n-00beb67e77,}" May 14 23:50:37.332875 kubelet[2988]: I0514 23:50:37.332847 2988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/02d3ecdbd73a09fdd2876e7eb086efcd-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-00beb67e77\" (UID: \"02d3ecdbd73a09fdd2876e7eb086efcd\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-00beb67e77" May 14 23:50:37.332914 kubelet[2988]: I0514 23:50:37.332884 2988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa9ec0e0b055e65ff68deb6fb3f4a288-ca-certs\") pod 
\"kube-controller-manager-ci-4230.1.1-n-00beb67e77\" (UID: \"fa9ec0e0b055e65ff68deb6fb3f4a288\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-00beb67e77" May 14 23:50:37.333067 kubelet[2988]: I0514 23:50:37.333047 2988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa9ec0e0b055e65ff68deb6fb3f4a288-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-00beb67e77\" (UID: \"fa9ec0e0b055e65ff68deb6fb3f4a288\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-00beb67e77" May 14 23:50:37.333095 kubelet[2988]: I0514 23:50:37.333076 2988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/02d3ecdbd73a09fdd2876e7eb086efcd-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-00beb67e77\" (UID: \"02d3ecdbd73a09fdd2876e7eb086efcd\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-00beb67e77" May 14 23:50:37.333118 kubelet[2988]: I0514 23:50:37.333102 2988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/02d3ecdbd73a09fdd2876e7eb086efcd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-00beb67e77\" (UID: \"02d3ecdbd73a09fdd2876e7eb086efcd\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-00beb67e77" May 14 23:50:37.333140 kubelet[2988]: I0514 23:50:37.333119 2988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa9ec0e0b055e65ff68deb6fb3f4a288-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-00beb67e77\" (UID: \"fa9ec0e0b055e65ff68deb6fb3f4a288\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-00beb67e77" May 14 23:50:37.333140 kubelet[2988]: I0514 23:50:37.333136 2988 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa9ec0e0b055e65ff68deb6fb3f4a288-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-00beb67e77\" (UID: \"fa9ec0e0b055e65ff68deb6fb3f4a288\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-00beb67e77" May 14 23:50:37.333183 kubelet[2988]: I0514 23:50:37.333151 2988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa9ec0e0b055e65ff68deb6fb3f4a288-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-00beb67e77\" (UID: \"fa9ec0e0b055e65ff68deb6fb3f4a288\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-00beb67e77" May 14 23:50:37.333183 kubelet[2988]: I0514 23:50:37.333175 2988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ccfdb9987a6789fa2ac2564ecaa6a5f6-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-00beb67e77\" (UID: \"ccfdb9987a6789fa2ac2564ecaa6a5f6\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-00beb67e77" May 14 23:50:37.403417 kubelet[2988]: I0514 23:50:37.403388 2988 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-00beb67e77" May 14 23:50:37.403709 kubelet[2988]: E0514 23:50:37.403671 2988 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4230.1.1-n-00beb67e77" May 14 23:50:37.501997 containerd[1746]: time="2025-05-14T23:50:37.501865669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-00beb67e77,Uid:02d3ecdbd73a09fdd2876e7eb086efcd,Namespace:kube-system,Attempt:0,}" May 14 23:50:37.514576 containerd[1746]: time="2025-05-14T23:50:37.514539693Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-00beb67e77,Uid:fa9ec0e0b055e65ff68deb6fb3f4a288,Namespace:kube-system,Attempt:0,}" May 14 23:50:37.518397 containerd[1746]: time="2025-05-14T23:50:37.518328048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-00beb67e77,Uid:ccfdb9987a6789fa2ac2564ecaa6a5f6,Namespace:kube-system,Attempt:0,}" May 14 23:50:37.633442 kubelet[2988]: E0514 23:50:37.633400 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-00beb67e77?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="800ms" May 14 23:50:37.805937 kubelet[2988]: I0514 23:50:37.805675 2988 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-00beb67e77" May 14 23:50:37.806044 kubelet[2988]: E0514 23:50:37.805997 2988 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4230.1.1-n-00beb67e77" May 14 23:50:37.904933 kubelet[2988]: W0514 23:50:37.904893 2988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 14 23:50:37.905076 kubelet[2988]: E0514 23:50:37.904942 2988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:37.921728 kubelet[2988]: W0514 23:50:37.921674 2988 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 14 23:50:37.921825 kubelet[2988]: E0514 23:50:37.921738 2988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:37.981040 kubelet[2988]: W0514 23:50:37.980935 2988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-00beb67e77&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 14 23:50:37.981040 kubelet[2988]: E0514 23:50:37.981005 2988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-00beb67e77&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:38.124241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3134130480.mount: Deactivated successfully. 
May 14 23:50:38.150743 containerd[1746]: time="2025-05-14T23:50:38.150132586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:50:38.162275 containerd[1746]: time="2025-05-14T23:50:38.162211411Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 14 23:50:38.165980 containerd[1746]: time="2025-05-14T23:50:38.165932006Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:50:38.174430 containerd[1746]: time="2025-05-14T23:50:38.174388035Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:50:38.178349 containerd[1746]: time="2025-05-14T23:50:38.178176030Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 23:50:38.182370 containerd[1746]: time="2025-05-14T23:50:38.181633785Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:50:38.185139 containerd[1746]: time="2025-05-14T23:50:38.185103181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:50:38.186049 containerd[1746]: time="2025-05-14T23:50:38.186021380Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 684.027031ms" May 14 23:50:38.187995 containerd[1746]: time="2025-05-14T23:50:38.187922977Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 23:50:38.192188 containerd[1746]: time="2025-05-14T23:50:38.192148612Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 677.538359ms" May 14 23:50:38.218152 containerd[1746]: time="2025-05-14T23:50:38.218111458Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 699.60645ms" May 14 23:50:38.237592 kubelet[2988]: W0514 23:50:38.237489 2988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 14 23:50:38.237592 kubelet[2988]: E0514 23:50:38.237558 2988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 14 
23:50:38.435784 kubelet[2988]: E0514 23:50:38.435310 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-00beb67e77?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="1.6s" May 14 23:50:38.608415 kubelet[2988]: I0514 23:50:38.608163 2988 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-00beb67e77" May 14 23:50:38.608648 kubelet[2988]: E0514 23:50:38.608622 2988 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4230.1.1-n-00beb67e77" May 14 23:50:38.964865 containerd[1746]: time="2025-05-14T23:50:38.964295128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:50:38.964865 containerd[1746]: time="2025-05-14T23:50:38.964644168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:50:38.964865 containerd[1746]: time="2025-05-14T23:50:38.964656568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:50:38.964865 containerd[1746]: time="2025-05-14T23:50:38.964748727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:50:38.967991 containerd[1746]: time="2025-05-14T23:50:38.967639404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:50:38.968144 containerd[1746]: time="2025-05-14T23:50:38.967980843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:50:38.968144 containerd[1746]: time="2025-05-14T23:50:38.968041283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:50:38.969450 containerd[1746]: time="2025-05-14T23:50:38.969289402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:50:38.970828 containerd[1746]: time="2025-05-14T23:50:38.970648160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:50:38.970828 containerd[1746]: time="2025-05-14T23:50:38.970747400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:50:38.970828 containerd[1746]: time="2025-05-14T23:50:38.970763520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:50:38.972562 containerd[1746]: time="2025-05-14T23:50:38.970958879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:50:38.994864 systemd[1]: Started cri-containerd-db9cb8f300a1e82d06cc02e003de8959ad3c6929b6f99f2e8326d440c5a29566.scope - libcontainer container db9cb8f300a1e82d06cc02e003de8959ad3c6929b6f99f2e8326d440c5a29566. May 14 23:50:38.996689 systemd[1]: Started cri-containerd-df12b93d5ec5d7719ef2a296fb381f23748afc2815500079f6ef3d747d3c65d9.scope - libcontainer container df12b93d5ec5d7719ef2a296fb381f23748afc2815500079f6ef3d747d3c65d9. May 14 23:50:39.000880 systemd[1]: Started cri-containerd-ece5a7911ac43b5e844ce446e7ce0e0c39834056f3b9245e5ba0f28ab759e0a4.scope - libcontainer container ece5a7911ac43b5e844ce446e7ce0e0c39834056f3b9245e5ba0f28ab759e0a4. 
May 14 23:50:39.048942 containerd[1746]: time="2025-05-14T23:50:39.048827858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-00beb67e77,Uid:ccfdb9987a6789fa2ac2564ecaa6a5f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"db9cb8f300a1e82d06cc02e003de8959ad3c6929b6f99f2e8326d440c5a29566\"" May 14 23:50:39.056059 containerd[1746]: time="2025-05-14T23:50:39.055986409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-00beb67e77,Uid:fa9ec0e0b055e65ff68deb6fb3f4a288,Namespace:kube-system,Attempt:0,} returns sandbox id \"df12b93d5ec5d7719ef2a296fb381f23748afc2815500079f6ef3d747d3c65d9\"" May 14 23:50:39.058713 containerd[1746]: time="2025-05-14T23:50:39.058638685Z" level=info msg="CreateContainer within sandbox \"db9cb8f300a1e82d06cc02e003de8959ad3c6929b6f99f2e8326d440c5a29566\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 23:50:39.064335 containerd[1746]: time="2025-05-14T23:50:39.064299718Z" level=info msg="CreateContainer within sandbox \"df12b93d5ec5d7719ef2a296fb381f23748afc2815500079f6ef3d747d3c65d9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 23:50:39.065657 containerd[1746]: time="2025-05-14T23:50:39.065627156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-00beb67e77,Uid:02d3ecdbd73a09fdd2876e7eb086efcd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ece5a7911ac43b5e844ce446e7ce0e0c39834056f3b9245e5ba0f28ab759e0a4\"" May 14 23:50:39.069052 containerd[1746]: time="2025-05-14T23:50:39.069021232Z" level=info msg="CreateContainer within sandbox \"ece5a7911ac43b5e844ce446e7ce0e0c39834056f3b9245e5ba0f28ab759e0a4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 23:50:39.093211 kubelet[2988]: E0514 23:50:39.093163 2988 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting 
a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:39.744991 kubelet[2988]: W0514 23:50:39.744930 2988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 14 23:50:39.745342 kubelet[2988]: E0514 23:50:39.744999 2988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:39.846003 kubelet[2988]: W0514 23:50:39.845950 2988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused May 14 23:50:39.846078 kubelet[2988]: E0514 23:50:39.846014 2988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:39.908826 kubelet[2988]: W0514 23:50:39.908771 2988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-00beb67e77&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: 
connect: connection refused May 14 23:50:40.043269 kubelet[2988]: E0514 23:50:39.908834 2988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-00beb67e77&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:40.043269 kubelet[2988]: E0514 23:50:40.035819 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-00beb67e77?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="3.2s" May 14 23:50:40.053170 containerd[1746]: time="2025-05-14T23:50:40.053126233Z" level=info msg="CreateContainer within sandbox \"db9cb8f300a1e82d06cc02e003de8959ad3c6929b6f99f2e8326d440c5a29566\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"450f79e5c5e1ad71f71fa13c7fff65e134848471ec3fb575be050b5ea08adcc1\"" May 14 23:50:40.053795 containerd[1746]: time="2025-05-14T23:50:40.053767832Z" level=info msg="StartContainer for \"450f79e5c5e1ad71f71fa13c7fff65e134848471ec3fb575be050b5ea08adcc1\"" May 14 23:50:40.065219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2338028165.mount: Deactivated successfully. 
May 14 23:50:40.079902 containerd[1746]: time="2025-05-14T23:50:40.079429078Z" level=info msg="CreateContainer within sandbox \"df12b93d5ec5d7719ef2a296fb381f23748afc2815500079f6ef3d747d3c65d9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2d6e42e131a2b52368349fe79ce4b6562ec82595caac516913247e35c08791ef\"" May 14 23:50:40.080295 containerd[1746]: time="2025-05-14T23:50:40.080271357Z" level=info msg="StartContainer for \"2d6e42e131a2b52368349fe79ce4b6562ec82595caac516913247e35c08791ef\"" May 14 23:50:40.090235 systemd[1]: Started cri-containerd-450f79e5c5e1ad71f71fa13c7fff65e134848471ec3fb575be050b5ea08adcc1.scope - libcontainer container 450f79e5c5e1ad71f71fa13c7fff65e134848471ec3fb575be050b5ea08adcc1. May 14 23:50:40.092040 containerd[1746]: time="2025-05-14T23:50:40.092007382Z" level=info msg="CreateContainer within sandbox \"ece5a7911ac43b5e844ce446e7ce0e0c39834056f3b9245e5ba0f28ab759e0a4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"20ef6e2f377e359843dc9e5755e828ab6e2ce4f1ccb8c9c415a115477e2cc98e\"" May 14 23:50:40.092981 containerd[1746]: time="2025-05-14T23:50:40.092914901Z" level=info msg="StartContainer for \"20ef6e2f377e359843dc9e5755e828ab6e2ce4f1ccb8c9c415a115477e2cc98e\"" May 14 23:50:40.124775 systemd[1]: run-containerd-runc-k8s.io-450f79e5c5e1ad71f71fa13c7fff65e134848471ec3fb575be050b5ea08adcc1-runc.4OKLMt.mount: Deactivated successfully. May 14 23:50:40.134859 systemd[1]: Started cri-containerd-2d6e42e131a2b52368349fe79ce4b6562ec82595caac516913247e35c08791ef.scope - libcontainer container 2d6e42e131a2b52368349fe79ce4b6562ec82595caac516913247e35c08791ef. May 14 23:50:40.138626 systemd[1]: Started cri-containerd-20ef6e2f377e359843dc9e5755e828ab6e2ce4f1ccb8c9c415a115477e2cc98e.scope - libcontainer container 20ef6e2f377e359843dc9e5755e828ab6e2ce4f1ccb8c9c415a115477e2cc98e. 
May 14 23:50:40.161844 containerd[1746]: time="2025-05-14T23:50:40.161709691Z" level=info msg="StartContainer for \"450f79e5c5e1ad71f71fa13c7fff65e134848471ec3fb575be050b5ea08adcc1\" returns successfully" May 14 23:50:40.192312 containerd[1746]: time="2025-05-14T23:50:40.192104012Z" level=info msg="StartContainer for \"2d6e42e131a2b52368349fe79ce4b6562ec82595caac516913247e35c08791ef\" returns successfully" May 14 23:50:40.199283 containerd[1746]: time="2025-05-14T23:50:40.199183723Z" level=info msg="StartContainer for \"20ef6e2f377e359843dc9e5755e828ab6e2ce4f1ccb8c9c415a115477e2cc98e\" returns successfully" May 14 23:50:40.213324 kubelet[2988]: I0514 23:50:40.213179 2988 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-00beb67e77" May 14 23:50:40.214036 kubelet[2988]: E0514 23:50:40.213999 2988 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4230.1.1-n-00beb67e77" May 14 23:50:43.239207 kubelet[2988]: E0514 23:50:43.239154 2988 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.1-n-00beb67e77\" not found" node="ci-4230.1.1-n-00beb67e77" May 14 23:50:43.416393 kubelet[2988]: I0514 23:50:43.416101 2988 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-00beb67e77" May 14 23:50:43.475766 kubelet[2988]: I0514 23:50:43.475337 2988 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.1.1-n-00beb67e77" May 14 23:50:43.475766 kubelet[2988]: E0514 23:50:43.475374 2988 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4230.1.1-n-00beb67e77\": node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:43.567735 kubelet[2988]: E0514 23:50:43.567679 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" 
not found" May 14 23:50:43.668335 kubelet[2988]: E0514 23:50:43.668297 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:43.768601 kubelet[2988]: E0514 23:50:43.768566 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:43.869311 kubelet[2988]: E0514 23:50:43.869024 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:43.969639 kubelet[2988]: E0514 23:50:43.969599 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:44.070919 kubelet[2988]: E0514 23:50:44.070872 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:44.171529 kubelet[2988]: E0514 23:50:44.171383 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:44.272087 kubelet[2988]: E0514 23:50:44.272041 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:44.372744 kubelet[2988]: E0514 23:50:44.372672 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:44.473315 kubelet[2988]: E0514 23:50:44.473194 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:44.574072 kubelet[2988]: E0514 23:50:44.574032 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:44.674578 kubelet[2988]: E0514 23:50:44.674541 2988 kubelet_node_status.go:453] "Error getting 
the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:44.775449 kubelet[2988]: E0514 23:50:44.775311 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:44.875728 kubelet[2988]: E0514 23:50:44.875673 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:44.976159 kubelet[2988]: E0514 23:50:44.976103 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:45.076950 kubelet[2988]: E0514 23:50:45.076911 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:45.177874 kubelet[2988]: E0514 23:50:45.177825 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:45.278475 kubelet[2988]: E0514 23:50:45.278438 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:45.379186 kubelet[2988]: E0514 23:50:45.379070 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:45.479572 kubelet[2988]: E0514 23:50:45.479530 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:45.580427 kubelet[2988]: E0514 23:50:45.580386 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:45.680927 kubelet[2988]: E0514 23:50:45.680812 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:45.781256 kubelet[2988]: 
E0514 23:50:45.781216 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:45.881718 kubelet[2988]: E0514 23:50:45.881649 2988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:46.020805 kubelet[2988]: I0514 23:50:46.020257 2988 apiserver.go:52] "Watching apiserver" May 14 23:50:46.033513 kubelet[2988]: I0514 23:50:46.033453 2988 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 23:50:46.923910 systemd[1]: Reload requested from client PID 3264 ('systemctl') (unit session-9.scope)... May 14 23:50:46.923964 systemd[1]: Reloading... May 14 23:50:47.054837 zram_generator::config[3314]: No configuration found. May 14 23:50:47.182275 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:50:47.297384 systemd[1]: Reloading finished in 373 ms. May 14 23:50:47.325722 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:50:47.340792 systemd[1]: kubelet.service: Deactivated successfully. May 14 23:50:47.341047 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:50:47.341125 systemd[1]: kubelet.service: Consumed 1.367s CPU time, 115.8M memory peak. May 14 23:50:47.346897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:50:47.556043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 23:50:47.560955 (kubelet)[3377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:50:47.607571 kubelet[3377]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:50:47.607571 kubelet[3377]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 23:50:47.607571 kubelet[3377]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:50:47.607571 kubelet[3377]: I0514 23:50:47.607619 3377 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:50:47.617202 kubelet[3377]: I0514 23:50:47.617173 3377 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 23:50:47.617418 kubelet[3377]: I0514 23:50:47.617255 3377 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:50:47.617809 kubelet[3377]: I0514 23:50:47.617645 3377 server.go:929] "Client rotation is on, will bootstrap in background" May 14 23:50:47.619873 kubelet[3377]: I0514 23:50:47.619845 3377 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 14 23:50:47.623648 kubelet[3377]: I0514 23:50:47.623364 3377 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:50:47.628250 kubelet[3377]: E0514 23:50:47.627912 3377 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 23:50:47.628250 kubelet[3377]: I0514 23:50:47.627946 3377 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 23:50:47.630878 kubelet[3377]: I0514 23:50:47.630851 3377 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 23:50:47.630974 kubelet[3377]: I0514 23:50:47.630956 3377 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 23:50:47.631078 kubelet[3377]: I0514 23:50:47.631046 3377 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:50:47.631231 kubelet[3377]: I0514 23:50:47.631074 3377 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4230.1.1-n-00beb67e77","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 23:50:47.631301 kubelet[3377]: I0514 23:50:47.631236 3377 topology_manager.go:138] "Creating topology manager with none policy" May 14 23:50:47.631301 kubelet[3377]: I0514 23:50:47.631245 3377 container_manager_linux.go:300] "Creating device plugin manager" May 14 23:50:47.631301 kubelet[3377]: I0514 23:50:47.631273 3377 state_mem.go:36] "Initialized new in-memory state store" May 14 23:50:47.631518 kubelet[3377]: I0514 23:50:47.631367 3377 
kubelet.go:408] "Attempting to sync node with API server" May 14 23:50:47.631518 kubelet[3377]: I0514 23:50:47.631387 3377 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:50:47.631518 kubelet[3377]: I0514 23:50:47.631407 3377 kubelet.go:314] "Adding apiserver pod source" May 14 23:50:47.631518 kubelet[3377]: I0514 23:50:47.631416 3377 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:50:47.635997 kubelet[3377]: I0514 23:50:47.635944 3377 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 23:50:47.636931 kubelet[3377]: I0514 23:50:47.636914 3377 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:50:47.638975 kubelet[3377]: I0514 23:50:47.638959 3377 server.go:1269] "Started kubelet" May 14 23:50:47.646193 kubelet[3377]: I0514 23:50:47.644501 3377 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:50:47.647705 kubelet[3377]: I0514 23:50:47.647065 3377 server.go:460] "Adding debug handlers to kubelet server" May 14 23:50:47.650700 kubelet[3377]: I0514 23:50:47.648444 3377 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:50:47.650700 kubelet[3377]: I0514 23:50:47.648648 3377 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:50:47.650700 kubelet[3377]: I0514 23:50:47.650307 3377 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:50:47.659688 kubelet[3377]: I0514 23:50:47.659656 3377 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 23:50:47.663066 kubelet[3377]: I0514 23:50:47.663035 3377 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 23:50:47.663257 kubelet[3377]: 
E0514 23:50:47.663237 3377 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-00beb67e77\" not found" May 14 23:50:47.672042 kubelet[3377]: I0514 23:50:47.672024 3377 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 23:50:47.672603 kubelet[3377]: I0514 23:50:47.672257 3377 reconciler.go:26] "Reconciler: start to sync state" May 14 23:50:47.675893 kubelet[3377]: I0514 23:50:47.675871 3377 factory.go:221] Registration of the systemd container factory successfully May 14 23:50:47.676070 kubelet[3377]: I0514 23:50:47.676051 3377 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:50:47.676474 kubelet[3377]: I0514 23:50:47.676449 3377 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:50:47.677680 kubelet[3377]: I0514 23:50:47.677661 3377 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 23:50:47.678315 kubelet[3377]: I0514 23:50:47.678303 3377 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 23:50:47.678741 kubelet[3377]: I0514 23:50:47.678373 3377 kubelet.go:2321] "Starting kubelet main sync loop" May 14 23:50:47.678741 kubelet[3377]: E0514 23:50:47.678419 3377 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:50:47.680592 kubelet[3377]: E0514 23:50:47.680559 3377 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 23:50:47.688072 kubelet[3377]: I0514 23:50:47.688051 3377 factory.go:221] Registration of the containerd container factory successfully May 14 23:50:47.741234 kubelet[3377]: I0514 23:50:47.741199 3377 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 23:50:47.741234 kubelet[3377]: I0514 23:50:47.741221 3377 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 23:50:47.741234 kubelet[3377]: I0514 23:50:47.741242 3377 state_mem.go:36] "Initialized new in-memory state store" May 14 23:50:47.741405 kubelet[3377]: I0514 23:50:47.741395 3377 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 23:50:47.741432 kubelet[3377]: I0514 23:50:47.741406 3377 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 23:50:47.741432 kubelet[3377]: I0514 23:50:47.741423 3377 policy_none.go:49] "None policy: Start" May 14 23:50:47.742166 kubelet[3377]: I0514 23:50:47.742135 3377 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 23:50:47.742166 kubelet[3377]: I0514 23:50:47.742160 3377 state_mem.go:35] "Initializing new in-memory state store" May 14 23:50:47.742369 kubelet[3377]: I0514 23:50:47.742347 3377 state_mem.go:75] "Updated machine memory state" May 14 23:50:47.746236 kubelet[3377]: I0514 23:50:47.746204 3377 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:50:47.746388 kubelet[3377]: I0514 23:50:47.746368 3377 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 23:50:47.746420 kubelet[3377]: I0514 23:50:47.746386 3377 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:50:47.747055 kubelet[3377]: I0514 23:50:47.746928 3377 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:50:47.789382 kubelet[3377]: W0514 23:50:47.789338 3377 
warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 23:50:47.798599 kubelet[3377]: W0514 23:50:47.798563 3377 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 23:50:47.798923 kubelet[3377]: W0514 23:50:47.798802 3377 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 23:50:47.849547 kubelet[3377]: I0514 23:50:47.849437 3377 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-00beb67e77" May 14 23:50:47.863068 kubelet[3377]: I0514 23:50:47.863030 3377 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.1.1-n-00beb67e77" May 14 23:50:47.863146 kubelet[3377]: I0514 23:50:47.863102 3377 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.1.1-n-00beb67e77" May 14 23:50:47.873378 kubelet[3377]: I0514 23:50:47.873147 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ccfdb9987a6789fa2ac2564ecaa6a5f6-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-00beb67e77\" (UID: \"ccfdb9987a6789fa2ac2564ecaa6a5f6\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-00beb67e77" May 14 23:50:47.873378 kubelet[3377]: I0514 23:50:47.873179 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa9ec0e0b055e65ff68deb6fb3f4a288-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-00beb67e77\" (UID: \"fa9ec0e0b055e65ff68deb6fb3f4a288\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-00beb67e77" May 14 23:50:47.873378 kubelet[3377]: I0514 23:50:47.873198 3377 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa9ec0e0b055e65ff68deb6fb3f4a288-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-00beb67e77\" (UID: \"fa9ec0e0b055e65ff68deb6fb3f4a288\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-00beb67e77" May 14 23:50:47.873378 kubelet[3377]: I0514 23:50:47.873217 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa9ec0e0b055e65ff68deb6fb3f4a288-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-00beb67e77\" (UID: \"fa9ec0e0b055e65ff68deb6fb3f4a288\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-00beb67e77" May 14 23:50:47.873378 kubelet[3377]: I0514 23:50:47.873236 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa9ec0e0b055e65ff68deb6fb3f4a288-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-00beb67e77\" (UID: \"fa9ec0e0b055e65ff68deb6fb3f4a288\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-00beb67e77" May 14 23:50:47.873551 kubelet[3377]: I0514 23:50:47.873255 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/02d3ecdbd73a09fdd2876e7eb086efcd-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-00beb67e77\" (UID: \"02d3ecdbd73a09fdd2876e7eb086efcd\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-00beb67e77" May 14 23:50:47.873551 kubelet[3377]: I0514 23:50:47.873271 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/02d3ecdbd73a09fdd2876e7eb086efcd-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-00beb67e77\" (UID: 
\"02d3ecdbd73a09fdd2876e7eb086efcd\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-00beb67e77" May 14 23:50:47.873551 kubelet[3377]: I0514 23:50:47.873287 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/02d3ecdbd73a09fdd2876e7eb086efcd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-00beb67e77\" (UID: \"02d3ecdbd73a09fdd2876e7eb086efcd\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-00beb67e77" May 14 23:50:47.873551 kubelet[3377]: I0514 23:50:47.873303 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa9ec0e0b055e65ff68deb6fb3f4a288-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-00beb67e77\" (UID: \"fa9ec0e0b055e65ff68deb6fb3f4a288\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-00beb67e77" May 14 23:50:47.946679 sudo[3406]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 23:50:47.947307 sudo[3406]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 23:50:48.413045 sudo[3406]: pam_unix(sudo:session): session closed for user root May 14 23:50:48.633564 kubelet[3377]: I0514 23:50:48.633344 3377 apiserver.go:52] "Watching apiserver" May 14 23:50:48.673309 kubelet[3377]: I0514 23:50:48.673050 3377 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 23:50:48.729533 kubelet[3377]: W0514 23:50:48.729257 3377 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 23:50:48.730036 kubelet[3377]: E0514 23:50:48.729427 3377 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-n-00beb67e77\" already exists" 
pod="kube-system/kube-apiserver-ci-4230.1.1-n-00beb67e77" May 14 23:50:48.763985 kubelet[3377]: I0514 23:50:48.762982 3377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.1-n-00beb67e77" podStartSLOduration=1.762949042 podStartE2EDuration="1.762949042s" podCreationTimestamp="2025-05-14 23:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:50:48.74952598 +0000 UTC m=+1.182874229" watchObservedRunningTime="2025-05-14 23:50:48.762949042 +0000 UTC m=+1.196297291" May 14 23:50:48.791089 kubelet[3377]: I0514 23:50:48.791020 3377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.1-n-00beb67e77" podStartSLOduration=1.7910031640000001 podStartE2EDuration="1.791003164s" podCreationTimestamp="2025-05-14 23:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:50:48.763314961 +0000 UTC m=+1.196663210" watchObservedRunningTime="2025-05-14 23:50:48.791003164 +0000 UTC m=+1.224351413" May 14 23:50:48.806509 kubelet[3377]: I0514 23:50:48.806259 3377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-00beb67e77" podStartSLOduration=1.806243783 podStartE2EDuration="1.806243783s" podCreationTimestamp="2025-05-14 23:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:50:48.791934522 +0000 UTC m=+1.225282731" watchObservedRunningTime="2025-05-14 23:50:48.806243783 +0000 UTC m=+1.239592032" May 14 23:50:49.572834 sudo[2310]: pam_unix(sudo:session): session closed for user root May 14 23:50:49.660096 sshd[2309]: Connection closed by 10.200.16.10 port 45802 May 14 23:50:49.661578 
sshd-session[2307]: pam_unix(sshd:session): session closed for user core May 14 23:50:49.665311 systemd-logind[1717]: Session 9 logged out. Waiting for processes to exit. May 14 23:50:49.665813 systemd[1]: sshd@6-10.200.20.38:22-10.200.16.10:45802.service: Deactivated successfully. May 14 23:50:49.668343 systemd[1]: session-9.scope: Deactivated successfully. May 14 23:50:49.668719 systemd[1]: session-9.scope: Consumed 6.944s CPU time, 257.3M memory peak. May 14 23:50:49.670255 systemd-logind[1717]: Removed session 9. May 14 23:50:51.271432 kubelet[3377]: I0514 23:50:51.271402 3377 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 23:50:51.271854 containerd[1746]: time="2025-05-14T23:50:51.271753827Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 23:50:51.272235 kubelet[3377]: I0514 23:50:51.271959 3377 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 23:50:52.310133 systemd[1]: Created slice kubepods-besteffort-pod353bdbd7_f58b_4322_885a_aeaade180dc0.slice - libcontainer container kubepods-besteffort-pod353bdbd7_f58b_4322_885a_aeaade180dc0.slice. May 14 23:50:52.338600 systemd[1]: Created slice kubepods-burstable-podda6ff1a1_b772_4add_93cb_3d95d098d2dc.slice - libcontainer container kubepods-burstable-podda6ff1a1_b772_4add_93cb_3d95d098d2dc.slice. 
May 14 23:50:52.400905 kubelet[3377]: I0514 23:50:52.400864 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/353bdbd7-f58b-4322-885a-aeaade180dc0-lib-modules\") pod \"kube-proxy-gh2ds\" (UID: \"353bdbd7-f58b-4322-885a-aeaade180dc0\") " pod="kube-system/kube-proxy-gh2ds" May 14 23:50:52.400905 kubelet[3377]: I0514 23:50:52.400902 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-bpf-maps\") pod \"cilium-6vx6m\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") " pod="kube-system/cilium-6vx6m" May 14 23:50:52.401273 kubelet[3377]: I0514 23:50:52.400921 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-xtables-lock\") pod \"cilium-6vx6m\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") " pod="kube-system/cilium-6vx6m" May 14 23:50:52.401273 kubelet[3377]: I0514 23:50:52.400936 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-lib-modules\") pod \"cilium-6vx6m\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") " pod="kube-system/cilium-6vx6m" May 14 23:50:52.401273 kubelet[3377]: I0514 23:50:52.400951 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-host-proc-sys-net\") pod \"cilium-6vx6m\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") " pod="kube-system/cilium-6vx6m" May 14 23:50:52.401273 kubelet[3377]: I0514 23:50:52.400966 3377 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/353bdbd7-f58b-4322-885a-aeaade180dc0-kube-proxy\") pod \"kube-proxy-gh2ds\" (UID: \"353bdbd7-f58b-4322-885a-aeaade180dc0\") " pod="kube-system/kube-proxy-gh2ds" May 14 23:50:52.401273 kubelet[3377]: I0514 23:50:52.400980 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djgxf\" (UniqueName: \"kubernetes.io/projected/da6ff1a1-b772-4add-93cb-3d95d098d2dc-kube-api-access-djgxf\") pod \"cilium-6vx6m\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") " pod="kube-system/cilium-6vx6m" May 14 23:50:52.401273 kubelet[3377]: I0514 23:50:52.401000 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-cni-path\") pod \"cilium-6vx6m\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") " pod="kube-system/cilium-6vx6m" May 14 23:50:52.401442 kubelet[3377]: I0514 23:50:52.401014 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-host-proc-sys-kernel\") pod \"cilium-6vx6m\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") " pod="kube-system/cilium-6vx6m" May 14 23:50:52.401442 kubelet[3377]: I0514 23:50:52.401028 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptrnw\" (UniqueName: \"kubernetes.io/projected/353bdbd7-f58b-4322-885a-aeaade180dc0-kube-api-access-ptrnw\") pod \"kube-proxy-gh2ds\" (UID: \"353bdbd7-f58b-4322-885a-aeaade180dc0\") " pod="kube-system/kube-proxy-gh2ds" May 14 23:50:52.401442 kubelet[3377]: I0514 23:50:52.401044 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-hostproc\") pod \"cilium-6vx6m\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") " pod="kube-system/cilium-6vx6m" May 14 23:50:52.401442 kubelet[3377]: I0514 23:50:52.401069 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-cilium-cgroup\") pod \"cilium-6vx6m\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") " pod="kube-system/cilium-6vx6m" May 14 23:50:52.401442 kubelet[3377]: I0514 23:50:52.401083 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-etc-cni-netd\") pod \"cilium-6vx6m\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") " pod="kube-system/cilium-6vx6m" May 14 23:50:52.401442 kubelet[3377]: I0514 23:50:52.401098 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da6ff1a1-b772-4add-93cb-3d95d098d2dc-hubble-tls\") pod \"cilium-6vx6m\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") " pod="kube-system/cilium-6vx6m" May 14 23:50:52.401559 kubelet[3377]: I0514 23:50:52.401112 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-cilium-run\") pod \"cilium-6vx6m\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") " pod="kube-system/cilium-6vx6m" May 14 23:50:52.401559 kubelet[3377]: I0514 23:50:52.401126 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da6ff1a1-b772-4add-93cb-3d95d098d2dc-clustermesh-secrets\") pod \"cilium-6vx6m\" (UID: 
\"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") " pod="kube-system/cilium-6vx6m" May 14 23:50:52.401559 kubelet[3377]: I0514 23:50:52.401143 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da6ff1a1-b772-4add-93cb-3d95d098d2dc-cilium-config-path\") pod \"cilium-6vx6m\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") " pod="kube-system/cilium-6vx6m" May 14 23:50:52.401559 kubelet[3377]: I0514 23:50:52.401160 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/353bdbd7-f58b-4322-885a-aeaade180dc0-xtables-lock\") pod \"kube-proxy-gh2ds\" (UID: \"353bdbd7-f58b-4322-885a-aeaade180dc0\") " pod="kube-system/kube-proxy-gh2ds" May 14 23:50:52.479361 systemd[1]: Created slice kubepods-besteffort-poda30f8932_940d_41e2_a7ce_134b00bf58ee.slice - libcontainer container kubepods-besteffort-poda30f8932_940d_41e2_a7ce_134b00bf58ee.slice. 
May 14 23:50:52.501685 kubelet[3377]: I0514 23:50:52.501645 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxx8v\" (UniqueName: \"kubernetes.io/projected/a30f8932-940d-41e2-a7ce-134b00bf58ee-kube-api-access-pxx8v\") pod \"cilium-operator-5d85765b45-w27rf\" (UID: \"a30f8932-940d-41e2-a7ce-134b00bf58ee\") " pod="kube-system/cilium-operator-5d85765b45-w27rf"
May 14 23:50:52.501685 kubelet[3377]: I0514 23:50:52.501933 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a30f8932-940d-41e2-a7ce-134b00bf58ee-cilium-config-path\") pod \"cilium-operator-5d85765b45-w27rf\" (UID: \"a30f8932-940d-41e2-a7ce-134b00bf58ee\") " pod="kube-system/cilium-operator-5d85765b45-w27rf"
May 14 23:50:52.617505 containerd[1746]: time="2025-05-14T23:50:52.617394317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gh2ds,Uid:353bdbd7-f58b-4322-885a-aeaade180dc0,Namespace:kube-system,Attempt:0,}"
May 14 23:50:52.642592 containerd[1746]: time="2025-05-14T23:50:52.642556963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6vx6m,Uid:da6ff1a1-b772-4add-93cb-3d95d098d2dc,Namespace:kube-system,Attempt:0,}"
May 14 23:50:52.656728 containerd[1746]: time="2025-05-14T23:50:52.656464744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:50:52.656728 containerd[1746]: time="2025-05-14T23:50:52.656513904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:50:52.656728 containerd[1746]: time="2025-05-14T23:50:52.656527864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:52.657298 containerd[1746]: time="2025-05-14T23:50:52.657250863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:52.678612 systemd[1]: Started cri-containerd-21ad5ba228ea185fee8713cd00f6b20dfabd6224073de0145ad9dc911bb95b28.scope - libcontainer container 21ad5ba228ea185fee8713cd00f6b20dfabd6224073de0145ad9dc911bb95b28.
May 14 23:50:52.688308 containerd[1746]: time="2025-05-14T23:50:52.688104661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:50:52.688308 containerd[1746]: time="2025-05-14T23:50:52.688238181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:50:52.688308 containerd[1746]: time="2025-05-14T23:50:52.688261021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:52.689002 containerd[1746]: time="2025-05-14T23:50:52.688904180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:52.707831 containerd[1746]: time="2025-05-14T23:50:52.707683674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gh2ds,Uid:353bdbd7-f58b-4322-885a-aeaade180dc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"21ad5ba228ea185fee8713cd00f6b20dfabd6224073de0145ad9dc911bb95b28\""
May 14 23:50:52.710875 systemd[1]: Started cri-containerd-321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3.scope - libcontainer container 321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3.
May 14 23:50:52.712759 containerd[1746]: time="2025-05-14T23:50:52.712335828Z" level=info msg="CreateContainer within sandbox \"21ad5ba228ea185fee8713cd00f6b20dfabd6224073de0145ad9dc911bb95b28\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 14 23:50:52.740171 containerd[1746]: time="2025-05-14T23:50:52.740130030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6vx6m,Uid:da6ff1a1-b772-4add-93cb-3d95d098d2dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3\""
May 14 23:50:52.742940 containerd[1746]: time="2025-05-14T23:50:52.742891827Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 14 23:50:52.769511 containerd[1746]: time="2025-05-14T23:50:52.769470950Z" level=info msg="CreateContainer within sandbox \"21ad5ba228ea185fee8713cd00f6b20dfabd6224073de0145ad9dc911bb95b28\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"beb0c47f66820e2f336b32e0d049cd54b6052ea29342b36b8d58d81a3a19e9a4\""
May 14 23:50:52.770158 containerd[1746]: time="2025-05-14T23:50:52.770132310Z" level=info msg="StartContainer for \"beb0c47f66820e2f336b32e0d049cd54b6052ea29342b36b8d58d81a3a19e9a4\""
May 14 23:50:52.786953 containerd[1746]: time="2025-05-14T23:50:52.786367927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-w27rf,Uid:a30f8932-940d-41e2-a7ce-134b00bf58ee,Namespace:kube-system,Attempt:0,}"
May 14 23:50:52.793848 systemd[1]: Started cri-containerd-beb0c47f66820e2f336b32e0d049cd54b6052ea29342b36b8d58d81a3a19e9a4.scope - libcontainer container beb0c47f66820e2f336b32e0d049cd54b6052ea29342b36b8d58d81a3a19e9a4.
May 14 23:50:52.830378 containerd[1746]: time="2025-05-14T23:50:52.830336668Z" level=info msg="StartContainer for \"beb0c47f66820e2f336b32e0d049cd54b6052ea29342b36b8d58d81a3a19e9a4\" returns successfully"
May 14 23:50:52.838621 containerd[1746]: time="2025-05-14T23:50:52.837845537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:50:52.838621 containerd[1746]: time="2025-05-14T23:50:52.838521097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:50:52.838939 containerd[1746]: time="2025-05-14T23:50:52.838549136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:52.840056 containerd[1746]: time="2025-05-14T23:50:52.839618615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:52.855956 systemd[1]: Started cri-containerd-80da24227f8123dbc5c97a6edbfd1338fbfad3138bc686183420322863abfc68.scope - libcontainer container 80da24227f8123dbc5c97a6edbfd1338fbfad3138bc686183420322863abfc68.
May 14 23:50:52.894463 containerd[1746]: time="2025-05-14T23:50:52.893828741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-w27rf,Uid:a30f8932-940d-41e2-a7ce-134b00bf58ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"80da24227f8123dbc5c97a6edbfd1338fbfad3138bc686183420322863abfc68\""
May 14 23:50:53.762253 kubelet[3377]: I0514 23:50:53.762168 3377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gh2ds" podStartSLOduration=1.76215168 podStartE2EDuration="1.76215168s" podCreationTimestamp="2025-05-14 23:50:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:50:53.74796222 +0000 UTC m=+6.181310509" watchObservedRunningTime="2025-05-14 23:50:53.76215168 +0000 UTC m=+6.195499929"
May 14 23:50:57.967663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3183040654.mount: Deactivated successfully.
May 14 23:51:00.204954 containerd[1746]: time="2025-05-14T23:51:00.204894638Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:51:00.208170 containerd[1746]: time="2025-05-14T23:51:00.208099992Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 14 23:51:00.211886 containerd[1746]: time="2025-05-14T23:51:00.211840025Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:51:00.213608 containerd[1746]: time="2025-05-14T23:51:00.213451863Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.470508076s"
May 14 23:51:00.213608 containerd[1746]: time="2025-05-14T23:51:00.213482663Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 14 23:51:00.215336 containerd[1746]: time="2025-05-14T23:51:00.214662540Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 14 23:51:00.217058 containerd[1746]: time="2025-05-14T23:51:00.217021896Z" level=info msg="CreateContainer within sandbox \"321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 23:51:00.258192 containerd[1746]: time="2025-05-14T23:51:00.258107863Z" level=info msg="CreateContainer within sandbox \"321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f\""
May 14 23:51:00.259404 containerd[1746]: time="2025-05-14T23:51:00.259320981Z" level=info msg="StartContainer for \"d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f\""
May 14 23:51:00.286917 systemd[1]: Started cri-containerd-d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f.scope - libcontainer container d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f.
May 14 23:51:00.312215 containerd[1746]: time="2025-05-14T23:51:00.312161366Z" level=info msg="StartContainer for \"d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f\" returns successfully"
May 14 23:51:00.317971 systemd[1]: cri-containerd-d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f.scope: Deactivated successfully.
May 14 23:51:01.244381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f-rootfs.mount: Deactivated successfully.
May 14 23:51:01.485958 containerd[1746]: time="2025-05-14T23:51:01.485873273Z" level=info msg="shim disconnected" id=d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f namespace=k8s.io
May 14 23:51:01.485958 containerd[1746]: time="2025-05-14T23:51:01.485929752Z" level=warning msg="cleaning up after shim disconnected" id=d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f namespace=k8s.io
May 14 23:51:01.485958 containerd[1746]: time="2025-05-14T23:51:01.485941112Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:51:01.756446 containerd[1746]: time="2025-05-14T23:51:01.756328990Z" level=info msg="CreateContainer within sandbox \"321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 23:51:01.791897 containerd[1746]: time="2025-05-14T23:51:01.791846687Z" level=info msg="CreateContainer within sandbox \"321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945\""
May 14 23:51:01.792461 containerd[1746]: time="2025-05-14T23:51:01.792409606Z" level=info msg="StartContainer for \"70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945\""
May 14 23:51:01.814887 systemd[1]: Started cri-containerd-70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945.scope - libcontainer container 70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945.
May 14 23:51:01.842038 containerd[1746]: time="2025-05-14T23:51:01.841906197Z" level=info msg="StartContainer for \"70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945\" returns successfully"
May 14 23:51:01.849907 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 23:51:01.850145 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 23:51:01.850528 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 14 23:51:01.859754 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:51:01.859929 systemd[1]: cri-containerd-70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945.scope: Deactivated successfully.
May 14 23:51:01.876425 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:51:01.888980 containerd[1746]: time="2025-05-14T23:51:01.888869874Z" level=info msg="shim disconnected" id=70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945 namespace=k8s.io
May 14 23:51:01.888980 containerd[1746]: time="2025-05-14T23:51:01.888945273Z" level=warning msg="cleaning up after shim disconnected" id=70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945 namespace=k8s.io
May 14 23:51:01.888980 containerd[1746]: time="2025-05-14T23:51:01.888955393Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:51:02.244579 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945-rootfs.mount: Deactivated successfully.
May 14 23:51:02.760499 containerd[1746]: time="2025-05-14T23:51:02.760221719Z" level=info msg="CreateContainer within sandbox \"321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 23:51:02.819267 containerd[1746]: time="2025-05-14T23:51:02.819204214Z" level=info msg="CreateContainer within sandbox \"321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132\""
May 14 23:51:02.821722 containerd[1746]: time="2025-05-14T23:51:02.820953091Z" level=info msg="StartContainer for \"52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132\""
May 14 23:51:02.851929 systemd[1]: Started cri-containerd-52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132.scope - libcontainer container 52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132.
May 14 23:51:02.881529 systemd[1]: cri-containerd-52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132.scope: Deactivated successfully.
May 14 23:51:02.888978 containerd[1746]: time="2025-05-14T23:51:02.888924035Z" level=info msg="StartContainer for \"52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132\" returns successfully"
May 14 23:51:02.927132 containerd[1746]: time="2025-05-14T23:51:02.926933383Z" level=info msg="shim disconnected" id=52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132 namespace=k8s.io
May 14 23:51:02.927132 containerd[1746]: time="2025-05-14T23:51:02.926985783Z" level=warning msg="cleaning up after shim disconnected" id=52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132 namespace=k8s.io
May 14 23:51:02.927132 containerd[1746]: time="2025-05-14T23:51:02.926993983Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:51:03.244439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132-rootfs.mount: Deactivated successfully.
May 14 23:51:03.765054 containerd[1746]: time="2025-05-14T23:51:03.764807473Z" level=info msg="CreateContainer within sandbox \"321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 23:51:03.806638 containerd[1746]: time="2025-05-14T23:51:03.806600735Z" level=info msg="CreateContainer within sandbox \"321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac\""
May 14 23:51:03.807585 containerd[1746]: time="2025-05-14T23:51:03.807549214Z" level=info msg="StartContainer for \"72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac\""
May 14 23:51:03.834851 systemd[1]: Started cri-containerd-72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac.scope - libcontainer container 72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac.
May 14 23:51:03.856424 systemd[1]: cri-containerd-72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac.scope: Deactivated successfully.
May 14 23:51:03.860980 containerd[1746]: time="2025-05-14T23:51:03.860791661Z" level=info msg="StartContainer for \"72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac\" returns successfully"
May 14 23:51:03.900359 containerd[1746]: time="2025-05-14T23:51:03.900284167Z" level=info msg="shim disconnected" id=72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac namespace=k8s.io
May 14 23:51:03.900359 containerd[1746]: time="2025-05-14T23:51:03.900351007Z" level=warning msg="cleaning up after shim disconnected" id=72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac namespace=k8s.io
May 14 23:51:03.900359 containerd[1746]: time="2025-05-14T23:51:03.900359687Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:51:04.244495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac-rootfs.mount: Deactivated successfully.
May 14 23:51:04.274978 containerd[1746]: time="2025-05-14T23:51:04.274926092Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:51:04.278324 containerd[1746]: time="2025-05-14T23:51:04.278289328Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 14 23:51:04.282959 containerd[1746]: time="2025-05-14T23:51:04.282900081Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:51:04.284247 containerd[1746]: time="2025-05-14T23:51:04.284219520Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.06952446s"
May 14 23:51:04.284431 containerd[1746]: time="2025-05-14T23:51:04.284341999Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 14 23:51:04.287900 containerd[1746]: time="2025-05-14T23:51:04.287743635Z" level=info msg="CreateContainer within sandbox \"80da24227f8123dbc5c97a6edbfd1338fbfad3138bc686183420322863abfc68\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 14 23:51:04.323140 containerd[1746]: time="2025-05-14T23:51:04.323095906Z" level=info msg="CreateContainer within sandbox \"80da24227f8123dbc5c97a6edbfd1338fbfad3138bc686183420322863abfc68\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0\""
May 14 23:51:04.323663 containerd[1746]: time="2025-05-14T23:51:04.323621305Z" level=info msg="StartContainer for \"91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0\""
May 14 23:51:04.349841 systemd[1]: Started cri-containerd-91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0.scope - libcontainer container 91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0.
May 14 23:51:04.374189 containerd[1746]: time="2025-05-14T23:51:04.374135796Z" level=info msg="StartContainer for \"91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0\" returns successfully"
May 14 23:51:04.773356 containerd[1746]: time="2025-05-14T23:51:04.773299688Z" level=info msg="CreateContainer within sandbox \"321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 23:51:04.783904 kubelet[3377]: I0514 23:51:04.783512 3377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-w27rf" podStartSLOduration=1.395431852 podStartE2EDuration="12.783495194s" podCreationTimestamp="2025-05-14 23:50:52 +0000 UTC" firstStartedPulling="2025-05-14 23:50:52.896936017 +0000 UTC m=+5.330284266" lastFinishedPulling="2025-05-14 23:51:04.284999399 +0000 UTC m=+16.718347608" observedRunningTime="2025-05-14 23:51:04.781324237 +0000 UTC m=+17.214672486" watchObservedRunningTime="2025-05-14 23:51:04.783495194 +0000 UTC m=+17.216843443"
May 14 23:51:04.822165 containerd[1746]: time="2025-05-14T23:51:04.822110501Z" level=info msg="CreateContainer within sandbox \"321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b\""
May 14 23:51:04.825176 containerd[1746]: time="2025-05-14T23:51:04.824947297Z" level=info msg="StartContainer for \"679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b\""
May 14 23:51:04.857888 systemd[1]: Started cri-containerd-679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b.scope - libcontainer container 679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b.
May 14 23:51:04.932566 containerd[1746]: time="2025-05-14T23:51:04.932510789Z" level=info msg="StartContainer for \"679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b\" returns successfully"
May 14 23:51:05.120834 kubelet[3377]: I0514 23:51:05.120807 3377 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 14 23:51:05.196804 systemd[1]: Created slice kubepods-burstable-pod6d2722fa_0c06_4d1f_ab9a_9f8397aceb1e.slice - libcontainer container kubepods-burstable-pod6d2722fa_0c06_4d1f_ab9a_9f8397aceb1e.slice.
May 14 23:51:05.210109 systemd[1]: Created slice kubepods-burstable-podd932f16e_8f93_4027_b045_e100edf8d869.slice - libcontainer container kubepods-burstable-podd932f16e_8f93_4027_b045_e100edf8d869.slice.
May 14 23:51:05.277953 kubelet[3377]: I0514 23:51:05.277903 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dfz7\" (UniqueName: \"kubernetes.io/projected/d932f16e-8f93-4027-b045-e100edf8d869-kube-api-access-5dfz7\") pod \"coredns-6f6b679f8f-6t8dd\" (UID: \"d932f16e-8f93-4027-b045-e100edf8d869\") " pod="kube-system/coredns-6f6b679f8f-6t8dd"
May 14 23:51:05.278086 kubelet[3377]: I0514 23:51:05.277956 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z44bm\" (UniqueName: \"kubernetes.io/projected/6d2722fa-0c06-4d1f-ab9a-9f8397aceb1e-kube-api-access-z44bm\") pod \"coredns-6f6b679f8f-99vmb\" (UID: \"6d2722fa-0c06-4d1f-ab9a-9f8397aceb1e\") " pod="kube-system/coredns-6f6b679f8f-99vmb"
May 14 23:51:05.278086 kubelet[3377]: I0514 23:51:05.278048 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d932f16e-8f93-4027-b045-e100edf8d869-config-volume\") pod \"coredns-6f6b679f8f-6t8dd\" (UID: \"d932f16e-8f93-4027-b045-e100edf8d869\") " pod="kube-system/coredns-6f6b679f8f-6t8dd"
May 14 23:51:05.278086 kubelet[3377]: I0514 23:51:05.278073 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d2722fa-0c06-4d1f-ab9a-9f8397aceb1e-config-volume\") pod \"coredns-6f6b679f8f-99vmb\" (UID: \"6d2722fa-0c06-4d1f-ab9a-9f8397aceb1e\") " pod="kube-system/coredns-6f6b679f8f-99vmb"
May 14 23:51:05.501264 containerd[1746]: time="2025-05-14T23:51:05.500857329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-99vmb,Uid:6d2722fa-0c06-4d1f-ab9a-9f8397aceb1e,Namespace:kube-system,Attempt:0,}"
May 14 23:51:05.515307 containerd[1746]: time="2025-05-14T23:51:05.515271269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6t8dd,Uid:d932f16e-8f93-4027-b045-e100edf8d869,Namespace:kube-system,Attempt:0,}"
May 14 23:51:05.796585 kubelet[3377]: I0514 23:51:05.796409 3377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6vx6m" podStartSLOduration=6.32316428 podStartE2EDuration="13.796320832s" podCreationTimestamp="2025-05-14 23:50:52 +0000 UTC" firstStartedPulling="2025-05-14 23:50:52.741305229 +0000 UTC m=+5.174653478" lastFinishedPulling="2025-05-14 23:51:00.214461781 +0000 UTC m=+12.647810030" observedRunningTime="2025-05-14 23:51:05.795995753 +0000 UTC m=+18.229344002" watchObservedRunningTime="2025-05-14 23:51:05.796320832 +0000 UTC m=+18.229669081"
May 14 23:51:08.121865 systemd-networkd[1343]: cilium_host: Link UP
May 14 23:51:08.122008 systemd-networkd[1343]: cilium_net: Link UP
May 14 23:51:08.122122 systemd-networkd[1343]: cilium_net: Gained carrier
May 14 23:51:08.122228 systemd-networkd[1343]: cilium_host: Gained carrier
May 14 23:51:08.304868 systemd-networkd[1343]: cilium_vxlan: Link UP
May 14 23:51:08.304877 systemd-networkd[1343]: cilium_vxlan: Gained carrier
May 14 23:51:08.677734 kernel: NET: Registered PF_ALG protocol family
May 14 23:51:08.979814 systemd-networkd[1343]: cilium_net: Gained IPv6LL
May 14 23:51:09.107831 systemd-networkd[1343]: cilium_host: Gained IPv6LL
May 14 23:51:09.385815 systemd-networkd[1343]: lxc_health: Link UP
May 14 23:51:09.389074 systemd-networkd[1343]: lxc_health: Gained carrier
May 14 23:51:09.572263 systemd-networkd[1343]: lxc2412438ff2c7: Link UP
May 14 23:51:09.585953 kernel: eth0: renamed from tmpdb94f
May 14 23:51:09.594421 systemd-networkd[1343]: lxc2412438ff2c7: Gained carrier
May 14 23:51:09.602805 systemd-networkd[1343]: lxc500b3bc9c7a0: Link UP
May 14 23:51:09.611714 kernel: eth0: renamed from tmp84a9e
May 14 23:51:09.617952 systemd-networkd[1343]: lxc500b3bc9c7a0: Gained carrier
May 14 23:51:09.747864 systemd-networkd[1343]: cilium_vxlan: Gained IPv6LL
May 14 23:51:10.963827 systemd-networkd[1343]: lxc500b3bc9c7a0: Gained IPv6LL
May 14 23:51:11.091840 systemd-networkd[1343]: lxc2412438ff2c7: Gained IPv6LL
May 14 23:51:11.155822 systemd-networkd[1343]: lxc_health: Gained IPv6LL
May 14 23:51:13.136712 containerd[1746]: time="2025-05-14T23:51:13.136609337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:51:13.136712 containerd[1746]: time="2025-05-14T23:51:13.136672736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:51:13.137151 containerd[1746]: time="2025-05-14T23:51:13.136687736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:51:13.137151 containerd[1746]: time="2025-05-14T23:51:13.136803136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:51:13.168861 systemd[1]: Started cri-containerd-db94f247575a7d4b0c6d2990b785d68aea8b115cd6065482fc2a8fb2ca9932b4.scope - libcontainer container db94f247575a7d4b0c6d2990b785d68aea8b115cd6065482fc2a8fb2ca9932b4.
May 14 23:51:13.176717 containerd[1746]: time="2025-05-14T23:51:13.174942592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:51:13.176717 containerd[1746]: time="2025-05-14T23:51:13.174997712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:51:13.177212 containerd[1746]: time="2025-05-14T23:51:13.177150149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:51:13.177488 containerd[1746]: time="2025-05-14T23:51:13.177433508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:51:13.208868 systemd[1]: Started cri-containerd-84a9e4e6ec6b8b9f9fa76c1cf9aa2ba27d829d2c9a4aa7a561575032f845282e.scope - libcontainer container 84a9e4e6ec6b8b9f9fa76c1cf9aa2ba27d829d2c9a4aa7a561575032f845282e.
May 14 23:51:13.223717 containerd[1746]: time="2025-05-14T23:51:13.223667471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-99vmb,Uid:6d2722fa-0c06-4d1f-ab9a-9f8397aceb1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"db94f247575a7d4b0c6d2990b785d68aea8b115cd6065482fc2a8fb2ca9932b4\""
May 14 23:51:13.227602 containerd[1746]: time="2025-05-14T23:51:13.227564144Z" level=info msg="CreateContainer within sandbox \"db94f247575a7d4b0c6d2990b785d68aea8b115cd6065482fc2a8fb2ca9932b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 23:51:13.259621 containerd[1746]: time="2025-05-14T23:51:13.259569851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6t8dd,Uid:d932f16e-8f93-4027-b045-e100edf8d869,Namespace:kube-system,Attempt:0,} returns sandbox id \"84a9e4e6ec6b8b9f9fa76c1cf9aa2ba27d829d2c9a4aa7a561575032f845282e\""
May 14 23:51:13.267906 containerd[1746]: time="2025-05-14T23:51:13.267844797Z" level=info msg="CreateContainer within sandbox \"84a9e4e6ec6b8b9f9fa76c1cf9aa2ba27d829d2c9a4aa7a561575032f845282e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 23:51:13.269530 containerd[1746]: time="2025-05-14T23:51:13.269481754Z" level=info msg="CreateContainer within sandbox \"db94f247575a7d4b0c6d2990b785d68aea8b115cd6065482fc2a8fb2ca9932b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a0610eff1366a33c01dcac7c71e6cf6cf19909f7991d12034ce9ecbac7615141\""
May 14 23:51:13.271425 containerd[1746]: time="2025-05-14T23:51:13.270534192Z" level=info msg="StartContainer for \"a0610eff1366a33c01dcac7c71e6cf6cf19909f7991d12034ce9ecbac7615141\""
May 14 23:51:13.302873 systemd[1]: Started cri-containerd-a0610eff1366a33c01dcac7c71e6cf6cf19909f7991d12034ce9ecbac7615141.scope - libcontainer container a0610eff1366a33c01dcac7c71e6cf6cf19909f7991d12034ce9ecbac7615141.
May 14 23:51:13.308679 containerd[1746]: time="2025-05-14T23:51:13.308599928Z" level=info msg="CreateContainer within sandbox \"84a9e4e6ec6b8b9f9fa76c1cf9aa2ba27d829d2c9a4aa7a561575032f845282e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea3d91a8c06699d6910369a0f7d2047256d92936689f789c4ac8116bdb605684\""
May 14 23:51:13.309981 containerd[1746]: time="2025-05-14T23:51:13.309322607Z" level=info msg="StartContainer for \"ea3d91a8c06699d6910369a0f7d2047256d92936689f789c4ac8116bdb605684\""
May 14 23:51:13.342085 systemd[1]: Started cri-containerd-ea3d91a8c06699d6910369a0f7d2047256d92936689f789c4ac8116bdb605684.scope - libcontainer container ea3d91a8c06699d6910369a0f7d2047256d92936689f789c4ac8116bdb605684.
May 14 23:51:13.344344 containerd[1746]: time="2025-05-14T23:51:13.344096869Z" level=info msg="StartContainer for \"a0610eff1366a33c01dcac7c71e6cf6cf19909f7991d12034ce9ecbac7615141\" returns successfully" May 14 23:51:13.373920 containerd[1746]: time="2025-05-14T23:51:13.373460460Z" level=info msg="StartContainer for \"ea3d91a8c06699d6910369a0f7d2047256d92936689f789c4ac8116bdb605684\" returns successfully" May 14 23:51:13.810544 kubelet[3377]: I0514 23:51:13.809962 3377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-6t8dd" podStartSLOduration=21.809949027000002 podStartE2EDuration="21.809949027s" podCreationTimestamp="2025-05-14 23:50:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:51:13.809911948 +0000 UTC m=+26.243260197" watchObservedRunningTime="2025-05-14 23:51:13.809949027 +0000 UTC m=+26.243297276" May 14 23:51:13.830753 kubelet[3377]: I0514 23:51:13.829563 3377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-99vmb" podStartSLOduration=21.829548024 podStartE2EDuration="21.829548024s" podCreationTimestamp="2025-05-14 23:50:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:51:13.828750786 +0000 UTC m=+26.262099075" watchObservedRunningTime="2025-05-14 23:51:13.829548024 +0000 UTC m=+26.262896273" May 14 23:51:17.612753 kubelet[3377]: I0514 23:51:17.611876 3377 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 23:52:52.465108 systemd[1]: Started sshd@7-10.200.20.38:22-10.200.16.10:49422.service - OpenSSH per-connection server daemon (10.200.16.10:49422). 
May 14 23:52:52.954555 sshd[4771]: Accepted publickey for core from 10.200.16.10 port 49422 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:52:52.956068 sshd-session[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:52:52.961159 systemd-logind[1717]: New session 10 of user core. May 14 23:52:52.966945 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 23:52:53.389818 sshd[4775]: Connection closed by 10.200.16.10 port 49422 May 14 23:52:53.390475 sshd-session[4771]: pam_unix(sshd:session): session closed for user core May 14 23:52:53.394350 systemd-logind[1717]: Session 10 logged out. Waiting for processes to exit. May 14 23:52:53.395107 systemd[1]: sshd@7-10.200.20.38:22-10.200.16.10:49422.service: Deactivated successfully. May 14 23:52:53.397332 systemd[1]: session-10.scope: Deactivated successfully. May 14 23:52:53.398950 systemd-logind[1717]: Removed session 10. May 14 23:52:58.484320 systemd[1]: Started sshd@8-10.200.20.38:22-10.200.16.10:56426.service - OpenSSH per-connection server daemon (10.200.16.10:56426). May 14 23:52:58.932295 sshd[4788]: Accepted publickey for core from 10.200.16.10 port 56426 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:52:58.933732 sshd-session[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:52:58.938747 systemd-logind[1717]: New session 11 of user core. May 14 23:52:58.944934 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 23:52:59.340084 sshd[4790]: Connection closed by 10.200.16.10 port 56426 May 14 23:52:59.339576 sshd-session[4788]: pam_unix(sshd:session): session closed for user core May 14 23:52:59.342901 systemd[1]: sshd@8-10.200.20.38:22-10.200.16.10:56426.service: Deactivated successfully. May 14 23:52:59.345124 systemd[1]: session-11.scope: Deactivated successfully. May 14 23:52:59.346172 systemd-logind[1717]: Session 11 logged out. 
Waiting for processes to exit. May 14 23:52:59.347566 systemd-logind[1717]: Removed session 11. May 14 23:53:04.434099 systemd[1]: Started sshd@9-10.200.20.38:22-10.200.16.10:56442.service - OpenSSH per-connection server daemon (10.200.16.10:56442). May 14 23:53:04.437579 update_engine[1726]: I20250514 23:53:04.434370 1726 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 14 23:53:04.437579 update_engine[1726]: I20250514 23:53:04.434406 1726 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 14 23:53:04.437579 update_engine[1726]: I20250514 23:53:04.434556 1726 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 14 23:53:04.437579 update_engine[1726]: I20250514 23:53:04.434950 1726 omaha_request_params.cc:62] Current group set to beta May 14 23:53:04.437579 update_engine[1726]: I20250514 23:53:04.435030 1726 update_attempter.cc:499] Already updated boot flags. Skipping. May 14 23:53:04.437579 update_engine[1726]: I20250514 23:53:04.435038 1726 update_attempter.cc:643] Scheduling an action processor start. 
May 14 23:53:04.437579 update_engine[1726]: I20250514 23:53:04.435055 1726 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 14 23:53:04.437579 update_engine[1726]: I20250514 23:53:04.435086 1726 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 14 23:53:04.437579 update_engine[1726]: I20250514 23:53:04.435135 1726 omaha_request_action.cc:271] Posting an Omaha request to disabled May 14 23:53:04.437579 update_engine[1726]: I20250514 23:53:04.435142 1726 omaha_request_action.cc:272] Request: May 14 23:53:04.437579 update_engine[1726]: May 14 23:53:04.437579 update_engine[1726]: May 14 23:53:04.437579 update_engine[1726]: May 14 23:53:04.437579 update_engine[1726]: May 14 23:53:04.437579 update_engine[1726]: May 14 23:53:04.437579 update_engine[1726]: May 14 23:53:04.437579 update_engine[1726]: May 14 23:53:04.437579 update_engine[1726]: May 14 23:53:04.437579 update_engine[1726]: I20250514 23:53:04.435148 1726 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 23:53:04.437579 update_engine[1726]: I20250514 23:53:04.436232 1726 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 23:53:04.438636 update_engine[1726]: I20250514 23:53:04.436596 1726 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 14 23:53:04.438885 locksmithd[1768]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 14 23:53:04.477543 update_engine[1726]: E20250514 23:53:04.477487 1726 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 23:53:04.477832 update_engine[1726]: I20250514 23:53:04.477792 1726 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 14 23:53:04.922870 sshd[4803]: Accepted publickey for core from 10.200.16.10 port 56442 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:53:04.924416 sshd-session[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:04.928810 systemd-logind[1717]: New session 12 of user core. May 14 23:53:04.935928 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 23:53:05.345068 sshd[4805]: Connection closed by 10.200.16.10 port 56442 May 14 23:53:05.344556 sshd-session[4803]: pam_unix(sshd:session): session closed for user core May 14 23:53:05.348241 systemd-logind[1717]: Session 12 logged out. Waiting for processes to exit. May 14 23:53:05.348817 systemd[1]: sshd@9-10.200.20.38:22-10.200.16.10:56442.service: Deactivated successfully. May 14 23:53:05.352443 systemd[1]: session-12.scope: Deactivated successfully. May 14 23:53:05.353486 systemd-logind[1717]: Removed session 12. May 14 23:53:10.435943 systemd[1]: Started sshd@10-10.200.20.38:22-10.200.16.10:54158.service - OpenSSH per-connection server daemon (10.200.16.10:54158). May 14 23:53:10.921259 sshd[4818]: Accepted publickey for core from 10.200.16.10 port 54158 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:53:10.922584 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:10.927467 systemd-logind[1717]: New session 13 of user core. 
May 14 23:53:10.935883 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 23:53:11.340528 sshd[4820]: Connection closed by 10.200.16.10 port 54158 May 14 23:53:11.340927 sshd-session[4818]: pam_unix(sshd:session): session closed for user core May 14 23:53:11.345335 systemd-logind[1717]: Session 13 logged out. Waiting for processes to exit. May 14 23:53:11.345979 systemd[1]: sshd@10-10.200.20.38:22-10.200.16.10:54158.service: Deactivated successfully. May 14 23:53:11.348408 systemd[1]: session-13.scope: Deactivated successfully. May 14 23:53:11.350011 systemd-logind[1717]: Removed session 13. May 14 23:53:11.423571 systemd[1]: Started sshd@11-10.200.20.38:22-10.200.16.10:54166.service - OpenSSH per-connection server daemon (10.200.16.10:54166). May 14 23:53:11.878772 sshd[4832]: Accepted publickey for core from 10.200.16.10 port 54166 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:53:11.880258 sshd-session[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:11.884227 systemd-logind[1717]: New session 14 of user core. May 14 23:53:11.890850 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 23:53:12.301148 sshd[4834]: Connection closed by 10.200.16.10 port 54166 May 14 23:53:12.301463 sshd-session[4832]: pam_unix(sshd:session): session closed for user core May 14 23:53:12.305387 systemd[1]: sshd@11-10.200.20.38:22-10.200.16.10:54166.service: Deactivated successfully. May 14 23:53:12.308269 systemd[1]: session-14.scope: Deactivated successfully. May 14 23:53:12.309465 systemd-logind[1717]: Session 14 logged out. Waiting for processes to exit. May 14 23:53:12.310495 systemd-logind[1717]: Removed session 14. May 14 23:53:12.390996 systemd[1]: Started sshd@12-10.200.20.38:22-10.200.16.10:54182.service - OpenSSH per-connection server daemon (10.200.16.10:54182). 
May 14 23:53:12.870299 sshd[4843]: Accepted publickey for core from 10.200.16.10 port 54182 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:53:12.872677 sshd-session[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:12.877445 systemd-logind[1717]: New session 15 of user core. May 14 23:53:12.880854 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 23:53:13.280090 sshd[4845]: Connection closed by 10.200.16.10 port 54182 May 14 23:53:13.279401 sshd-session[4843]: pam_unix(sshd:session): session closed for user core May 14 23:53:13.283003 systemd-logind[1717]: Session 15 logged out. Waiting for processes to exit. May 14 23:53:13.283567 systemd[1]: sshd@12-10.200.20.38:22-10.200.16.10:54182.service: Deactivated successfully. May 14 23:53:13.286376 systemd[1]: session-15.scope: Deactivated successfully. May 14 23:53:13.287526 systemd-logind[1717]: Removed session 15. May 14 23:53:14.429902 update_engine[1726]: I20250514 23:53:14.429772 1726 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 23:53:14.430307 update_engine[1726]: I20250514 23:53:14.430081 1726 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 23:53:14.430387 update_engine[1726]: I20250514 23:53:14.430356 1726 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 14 23:53:14.452184 update_engine[1726]: E20250514 23:53:14.452129 1726 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 23:53:14.452287 update_engine[1726]: I20250514 23:53:14.452216 1726 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 14 23:53:18.368048 systemd[1]: Started sshd@13-10.200.20.38:22-10.200.16.10:54194.service - OpenSSH per-connection server daemon (10.200.16.10:54194). 
May 14 23:53:18.821840 sshd[4857]: Accepted publickey for core from 10.200.16.10 port 54194 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:53:18.823178 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:18.828562 systemd-logind[1717]: New session 16 of user core. May 14 23:53:18.835895 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 23:53:19.236806 sshd[4859]: Connection closed by 10.200.16.10 port 54194 May 14 23:53:19.236231 sshd-session[4857]: pam_unix(sshd:session): session closed for user core May 14 23:53:19.239989 systemd[1]: sshd@13-10.200.20.38:22-10.200.16.10:54194.service: Deactivated successfully. May 14 23:53:19.242334 systemd[1]: session-16.scope: Deactivated successfully. May 14 23:53:19.243201 systemd-logind[1717]: Session 16 logged out. Waiting for processes to exit. May 14 23:53:19.244408 systemd-logind[1717]: Removed session 16. May 14 23:53:24.323037 systemd[1]: Started sshd@14-10.200.20.38:22-10.200.16.10:34992.service - OpenSSH per-connection server daemon (10.200.16.10:34992). May 14 23:53:24.435364 update_engine[1726]: I20250514 23:53:24.434825 1726 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 23:53:24.435364 update_engine[1726]: I20250514 23:53:24.435086 1726 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 23:53:24.435364 update_engine[1726]: I20250514 23:53:24.435322 1726 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 14 23:53:24.445916 update_engine[1726]: E20250514 23:53:24.445811 1726 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 23:53:24.445916 update_engine[1726]: I20250514 23:53:24.445891 1726 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 14 23:53:24.771770 sshd[4872]: Accepted publickey for core from 10.200.16.10 port 34992 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:53:24.773081 sshd-session[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:24.778815 systemd-logind[1717]: New session 17 of user core. May 14 23:53:24.785093 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 23:53:25.158766 sshd[4874]: Connection closed by 10.200.16.10 port 34992 May 14 23:53:25.159356 sshd-session[4872]: pam_unix(sshd:session): session closed for user core May 14 23:53:25.163178 systemd[1]: sshd@14-10.200.20.38:22-10.200.16.10:34992.service: Deactivated successfully. May 14 23:53:25.166034 systemd[1]: session-17.scope: Deactivated successfully. May 14 23:53:25.167244 systemd-logind[1717]: Session 17 logged out. Waiting for processes to exit. May 14 23:53:25.168647 systemd-logind[1717]: Removed session 17. May 14 23:53:25.249992 systemd[1]: Started sshd@15-10.200.20.38:22-10.200.16.10:35000.service - OpenSSH per-connection server daemon (10.200.16.10:35000). May 14 23:53:25.732434 sshd[4885]: Accepted publickey for core from 10.200.16.10 port 35000 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:53:25.733825 sshd-session[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:25.737979 systemd-logind[1717]: New session 18 of user core. May 14 23:53:25.746879 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 14 23:53:26.172292 sshd[4887]: Connection closed by 10.200.16.10 port 35000 May 14 23:53:26.173077 sshd-session[4885]: pam_unix(sshd:session): session closed for user core May 14 23:53:26.176614 systemd[1]: sshd@15-10.200.20.38:22-10.200.16.10:35000.service: Deactivated successfully. May 14 23:53:26.178738 systemd[1]: session-18.scope: Deactivated successfully. May 14 23:53:26.179547 systemd-logind[1717]: Session 18 logged out. Waiting for processes to exit. May 14 23:53:26.180602 systemd-logind[1717]: Removed session 18. May 14 23:53:26.262955 systemd[1]: Started sshd@16-10.200.20.38:22-10.200.16.10:35012.service - OpenSSH per-connection server daemon (10.200.16.10:35012). May 14 23:53:26.746187 sshd[4897]: Accepted publickey for core from 10.200.16.10 port 35012 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:53:26.747617 sshd-session[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:26.751680 systemd-logind[1717]: New session 19 of user core. May 14 23:53:26.759842 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 23:53:28.655541 sshd[4899]: Connection closed by 10.200.16.10 port 35012 May 14 23:53:28.656518 sshd-session[4897]: pam_unix(sshd:session): session closed for user core May 14 23:53:28.659708 systemd-logind[1717]: Session 19 logged out. Waiting for processes to exit. May 14 23:53:28.661656 systemd[1]: sshd@16-10.200.20.38:22-10.200.16.10:35012.service: Deactivated successfully. May 14 23:53:28.663439 systemd[1]: session-19.scope: Deactivated successfully. May 14 23:53:28.663664 systemd[1]: session-19.scope: Consumed 431ms CPU time, 67.3M memory peak. May 14 23:53:28.664593 systemd-logind[1717]: Removed session 19. May 14 23:53:28.755400 systemd[1]: Started sshd@17-10.200.20.38:22-10.200.16.10:36078.service - OpenSSH per-connection server daemon (10.200.16.10:36078). 
May 14 23:53:29.246128 sshd[4916]: Accepted publickey for core from 10.200.16.10 port 36078 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:53:29.247565 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:29.252752 systemd-logind[1717]: New session 20 of user core. May 14 23:53:29.258873 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 23:53:29.790893 sshd[4918]: Connection closed by 10.200.16.10 port 36078 May 14 23:53:29.791434 sshd-session[4916]: pam_unix(sshd:session): session closed for user core May 14 23:53:29.794766 systemd[1]: sshd@17-10.200.20.38:22-10.200.16.10:36078.service: Deactivated successfully. May 14 23:53:29.796534 systemd[1]: session-20.scope: Deactivated successfully. May 14 23:53:29.797284 systemd-logind[1717]: Session 20 logged out. Waiting for processes to exit. May 14 23:53:29.798656 systemd-logind[1717]: Removed session 20. May 14 23:53:29.879995 systemd[1]: Started sshd@18-10.200.20.38:22-10.200.16.10:36088.service - OpenSSH per-connection server daemon (10.200.16.10:36088). May 14 23:53:30.337311 sshd[4928]: Accepted publickey for core from 10.200.16.10 port 36088 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:53:30.338874 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:30.343901 systemd-logind[1717]: New session 21 of user core. May 14 23:53:30.349873 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 23:53:30.745085 sshd[4930]: Connection closed by 10.200.16.10 port 36088 May 14 23:53:30.745825 sshd-session[4928]: pam_unix(sshd:session): session closed for user core May 14 23:53:30.749642 systemd[1]: sshd@18-10.200.20.38:22-10.200.16.10:36088.service: Deactivated successfully. May 14 23:53:30.752983 systemd[1]: session-21.scope: Deactivated successfully. May 14 23:53:30.754299 systemd-logind[1717]: Session 21 logged out. 
Waiting for processes to exit. May 14 23:53:30.755267 systemd-logind[1717]: Removed session 21. May 14 23:53:34.432513 update_engine[1726]: I20250514 23:53:34.432451 1726 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 23:53:34.432952 update_engine[1726]: I20250514 23:53:34.432683 1726 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 23:53:34.433834 update_engine[1726]: I20250514 23:53:34.433738 1726 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 14 23:53:34.455661 update_engine[1726]: E20250514 23:53:34.455143 1726 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 23:53:34.455661 update_engine[1726]: I20250514 23:53:34.455246 1726 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 14 23:53:34.455661 update_engine[1726]: I20250514 23:53:34.455260 1726 omaha_request_action.cc:617] Omaha request response: May 14 23:53:34.455661 update_engine[1726]: E20250514 23:53:34.455346 1726 omaha_request_action.cc:636] Omaha request network transfer failed. May 14 23:53:34.455661 update_engine[1726]: I20250514 23:53:34.455363 1726 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 14 23:53:34.455661 update_engine[1726]: I20250514 23:53:34.455369 1726 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 14 23:53:34.455661 update_engine[1726]: I20250514 23:53:34.455374 1726 update_attempter.cc:306] Processing Done. May 14 23:53:34.455661 update_engine[1726]: E20250514 23:53:34.455391 1726 update_attempter.cc:619] Update failed. 
May 14 23:53:34.455661 update_engine[1726]: I20250514 23:53:34.455427 1726 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 14 23:53:34.455661 update_engine[1726]: I20250514 23:53:34.455431 1726 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 14 23:53:34.455661 update_engine[1726]: I20250514 23:53:34.455436 1726 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 14 23:53:34.455661 update_engine[1726]: I20250514 23:53:34.455503 1726 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 14 23:53:34.455661 update_engine[1726]: I20250514 23:53:34.455530 1726 omaha_request_action.cc:271] Posting an Omaha request to disabled May 14 23:53:34.455661 update_engine[1726]: I20250514 23:53:34.455536 1726 omaha_request_action.cc:272] Request: May 14 23:53:34.455661 update_engine[1726]: May 14 23:53:34.455661 update_engine[1726]: May 14 23:53:34.456225 update_engine[1726]: May 14 23:53:34.456225 update_engine[1726]: May 14 23:53:34.456225 update_engine[1726]: May 14 23:53:34.456225 update_engine[1726]: May 14 23:53:34.456225 update_engine[1726]: I20250514 23:53:34.455541 1726 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 23:53:34.456225 update_engine[1726]: I20250514 23:53:34.455796 1726 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 23:53:34.456225 update_engine[1726]: I20250514 23:53:34.456084 1726 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 14 23:53:34.456422 locksmithd[1768]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 14 23:53:34.559137 update_engine[1726]: E20250514 23:53:34.559080 1726 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 23:53:34.559261 update_engine[1726]: I20250514 23:53:34.559166 1726 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 14 23:53:34.559261 update_engine[1726]: I20250514 23:53:34.559175 1726 omaha_request_action.cc:617] Omaha request response: May 14 23:53:34.559261 update_engine[1726]: I20250514 23:53:34.559185 1726 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 14 23:53:34.559261 update_engine[1726]: I20250514 23:53:34.559217 1726 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 14 23:53:34.559261 update_engine[1726]: I20250514 23:53:34.559235 1726 update_attempter.cc:306] Processing Done. May 14 23:53:34.559261 update_engine[1726]: I20250514 23:53:34.559240 1726 update_attempter.cc:310] Error event sent. May 14 23:53:34.559261 update_engine[1726]: I20250514 23:53:34.559250 1726 update_check_scheduler.cc:74] Next update check in 46m38s May 14 23:53:34.559845 locksmithd[1768]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 14 23:53:35.831939 systemd[1]: Started sshd@19-10.200.20.38:22-10.200.16.10:36104.service - OpenSSH per-connection server daemon (10.200.16.10:36104). May 14 23:53:36.288029 sshd[4944]: Accepted publickey for core from 10.200.16.10 port 36104 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:53:36.289465 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:36.294755 systemd-logind[1717]: New session 22 of user core. 
May 14 23:53:36.301958 systemd[1]: Started session-22.scope - Session 22 of User core. May 14 23:53:36.673087 sshd[4946]: Connection closed by 10.200.16.10 port 36104 May 14 23:53:36.673658 sshd-session[4944]: pam_unix(sshd:session): session closed for user core May 14 23:53:36.677029 systemd[1]: sshd@19-10.200.20.38:22-10.200.16.10:36104.service: Deactivated successfully. May 14 23:53:36.678852 systemd[1]: session-22.scope: Deactivated successfully. May 14 23:53:36.679614 systemd-logind[1717]: Session 22 logged out. Waiting for processes to exit. May 14 23:53:36.680521 systemd-logind[1717]: Removed session 22. May 14 23:53:41.761894 systemd[1]: Started sshd@20-10.200.20.38:22-10.200.16.10:52818.service - OpenSSH per-connection server daemon (10.200.16.10:52818). May 14 23:53:42.259346 sshd[4957]: Accepted publickey for core from 10.200.16.10 port 52818 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:53:42.260768 sshd-session[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:42.266199 systemd-logind[1717]: New session 23 of user core. May 14 23:53:42.273065 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 23:53:42.677371 sshd[4959]: Connection closed by 10.200.16.10 port 52818 May 14 23:53:42.677915 sshd-session[4957]: pam_unix(sshd:session): session closed for user core May 14 23:53:42.681387 systemd[1]: sshd@20-10.200.20.38:22-10.200.16.10:52818.service: Deactivated successfully. May 14 23:53:42.683549 systemd[1]: session-23.scope: Deactivated successfully. May 14 23:53:42.684353 systemd-logind[1717]: Session 23 logged out. Waiting for processes to exit. May 14 23:53:42.685379 systemd-logind[1717]: Removed session 23. May 14 23:53:47.772086 systemd[1]: Started sshd@21-10.200.20.38:22-10.200.16.10:52826.service - OpenSSH per-connection server daemon (10.200.16.10:52826). 
May 14 23:53:48.257900 sshd[4972]: Accepted publickey for core from 10.200.16.10 port 52826 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:53:48.259199 sshd-session[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:48.264471 systemd-logind[1717]: New session 24 of user core. May 14 23:53:48.268850 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 23:53:48.673527 sshd[4974]: Connection closed by 10.200.16.10 port 52826 May 14 23:53:48.674139 sshd-session[4972]: pam_unix(sshd:session): session closed for user core May 14 23:53:48.677468 systemd[1]: sshd@21-10.200.20.38:22-10.200.16.10:52826.service: Deactivated successfully. May 14 23:53:48.679351 systemd[1]: session-24.scope: Deactivated successfully. May 14 23:53:48.680402 systemd-logind[1717]: Session 24 logged out. Waiting for processes to exit. May 14 23:53:48.681487 systemd-logind[1717]: Removed session 24. May 14 23:53:48.761400 systemd[1]: Started sshd@22-10.200.20.38:22-10.200.16.10:53112.service - OpenSSH per-connection server daemon (10.200.16.10:53112). May 14 23:53:49.246408 sshd[4986]: Accepted publickey for core from 10.200.16.10 port 53112 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:53:49.247716 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:49.252525 systemd-logind[1717]: New session 25 of user core. May 14 23:53:49.261851 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 14 23:53:51.632529 containerd[1746]: time="2025-05-14T23:53:51.632356209Z" level=info msg="StopContainer for \"91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0\" with timeout 30 (s)" May 14 23:53:51.636071 containerd[1746]: time="2025-05-14T23:53:51.633030527Z" level=info msg="Stop container \"91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0\" with signal terminated" May 14 23:53:51.652664 containerd[1746]: time="2025-05-14T23:53:51.652615855Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 23:53:51.655824 systemd[1]: cri-containerd-91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0.scope: Deactivated successfully. May 14 23:53:51.664439 containerd[1746]: time="2025-05-14T23:53:51.664301996Z" level=info msg="StopContainer for \"679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b\" with timeout 2 (s)" May 14 23:53:51.664833 containerd[1746]: time="2025-05-14T23:53:51.664756755Z" level=info msg="Stop container \"679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b\" with signal terminated" May 14 23:53:51.673299 systemd-networkd[1343]: lxc_health: Link DOWN May 14 23:53:51.673305 systemd-networkd[1343]: lxc_health: Lost carrier May 14 23:53:51.689576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0-rootfs.mount: Deactivated successfully. May 14 23:53:51.691661 systemd[1]: cri-containerd-679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b.scope: Deactivated successfully. May 14 23:53:51.692235 systemd[1]: cri-containerd-679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b.scope: Consumed 6.229s CPU time, 124.9M memory peak, 128K read from disk, 12.9M written to disk. 
May 14 23:53:51.712601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b-rootfs.mount: Deactivated successfully.
May 14 23:53:51.749793 containerd[1746]: time="2025-05-14T23:53:51.749689214Z" level=info msg="shim disconnected" id=679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b namespace=k8s.io
May 14 23:53:51.749793 containerd[1746]: time="2025-05-14T23:53:51.749786374Z" level=warning msg="cleaning up after shim disconnected" id=679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b namespace=k8s.io
May 14 23:53:51.750277 containerd[1746]: time="2025-05-14T23:53:51.749809054Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:53:51.750277 containerd[1746]: time="2025-05-14T23:53:51.749967054Z" level=info msg="shim disconnected" id=91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0 namespace=k8s.io
May 14 23:53:51.750277 containerd[1746]: time="2025-05-14T23:53:51.749993654Z" level=warning msg="cleaning up after shim disconnected" id=91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0 namespace=k8s.io
May 14 23:53:51.750277 containerd[1746]: time="2025-05-14T23:53:51.750003854Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:53:51.770973 containerd[1746]: time="2025-05-14T23:53:51.770925219Z" level=info msg="StopContainer for \"91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0\" returns successfully"
May 14 23:53:51.771595 containerd[1746]: time="2025-05-14T23:53:51.771570138Z" level=info msg="StopPodSandbox for \"80da24227f8123dbc5c97a6edbfd1338fbfad3138bc686183420322863abfc68\""
May 14 23:53:51.771664 containerd[1746]: time="2025-05-14T23:53:51.771605338Z" level=info msg="Container to stop \"91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:53:51.773644 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80da24227f8123dbc5c97a6edbfd1338fbfad3138bc686183420322863abfc68-shm.mount: Deactivated successfully.
May 14 23:53:51.774618 containerd[1746]: time="2025-05-14T23:53:51.774306893Z" level=info msg="StopContainer for \"679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b\" returns successfully"
May 14 23:53:51.774953 containerd[1746]: time="2025-05-14T23:53:51.774881653Z" level=info msg="StopPodSandbox for \"321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3\""
May 14 23:53:51.774953 containerd[1746]: time="2025-05-14T23:53:51.774910772Z" level=info msg="Container to stop \"d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:53:51.774953 containerd[1746]: time="2025-05-14T23:53:51.774920892Z" level=info msg="Container to stop \"70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:53:51.774953 containerd[1746]: time="2025-05-14T23:53:51.774932252Z" level=info msg="Container to stop \"72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:53:51.775309 containerd[1746]: time="2025-05-14T23:53:51.774940172Z" level=info msg="Container to stop \"52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:53:51.775309 containerd[1746]: time="2025-05-14T23:53:51.775175772Z" level=info msg="Container to stop \"679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:53:51.778718 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3-shm.mount: Deactivated successfully.
May 14 23:53:51.783349 systemd[1]: cri-containerd-321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3.scope: Deactivated successfully.
May 14 23:53:51.786227 systemd[1]: cri-containerd-80da24227f8123dbc5c97a6edbfd1338fbfad3138bc686183420322863abfc68.scope: Deactivated successfully.
May 14 23:53:51.823134 containerd[1746]: time="2025-05-14T23:53:51.823074133Z" level=info msg="shim disconnected" id=80da24227f8123dbc5c97a6edbfd1338fbfad3138bc686183420322863abfc68 namespace=k8s.io
May 14 23:53:51.823574 containerd[1746]: time="2025-05-14T23:53:51.823370252Z" level=warning msg="cleaning up after shim disconnected" id=80da24227f8123dbc5c97a6edbfd1338fbfad3138bc686183420322863abfc68 namespace=k8s.io
May 14 23:53:51.823986 containerd[1746]: time="2025-05-14T23:53:51.823943931Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:53:51.824818 containerd[1746]: time="2025-05-14T23:53:51.823326892Z" level=info msg="shim disconnected" id=321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3 namespace=k8s.io
May 14 23:53:51.824928 containerd[1746]: time="2025-05-14T23:53:51.824913130Z" level=warning msg="cleaning up after shim disconnected" id=321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3 namespace=k8s.io
May 14 23:53:51.824991 containerd[1746]: time="2025-05-14T23:53:51.824978210Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:53:51.838346 containerd[1746]: time="2025-05-14T23:53:51.838295668Z" level=info msg="TearDown network for sandbox \"321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3\" successfully"
May 14 23:53:51.838346 containerd[1746]: time="2025-05-14T23:53:51.838331827Z" level=info msg="StopPodSandbox for \"321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3\" returns successfully"
May 14 23:53:51.840661 containerd[1746]: time="2025-05-14T23:53:51.840426024Z" level=info msg="TearDown network for sandbox \"80da24227f8123dbc5c97a6edbfd1338fbfad3138bc686183420322863abfc68\" successfully"
May 14 23:53:51.840661 containerd[1746]: time="2025-05-14T23:53:51.840455504Z" level=info msg="StopPodSandbox for \"80da24227f8123dbc5c97a6edbfd1338fbfad3138bc686183420322863abfc68\" returns successfully"
May 14 23:53:51.949808 kubelet[3377]: I0514 23:53:51.949150 3377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-lib-modules\") pod \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") "
May 14 23:53:51.949808 kubelet[3377]: I0514 23:53:51.949196 3377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djgxf\" (UniqueName: \"kubernetes.io/projected/da6ff1a1-b772-4add-93cb-3d95d098d2dc-kube-api-access-djgxf\") pod \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") "
May 14 23:53:51.949808 kubelet[3377]: I0514 23:53:51.949213 3377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-cilium-cgroup\") pod \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") "
May 14 23:53:51.949808 kubelet[3377]: I0514 23:53:51.949235 3377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-bpf-maps\") pod \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") "
May 14 23:53:51.949808 kubelet[3377]: I0514 23:53:51.949254 3377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxx8v\" (UniqueName: \"kubernetes.io/projected/a30f8932-940d-41e2-a7ce-134b00bf58ee-kube-api-access-pxx8v\") pod \"a30f8932-940d-41e2-a7ce-134b00bf58ee\" (UID: \"a30f8932-940d-41e2-a7ce-134b00bf58ee\") "
May 14 23:53:51.949808 kubelet[3377]: I0514 23:53:51.949268 3377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-host-proc-sys-net\") pod \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") "
May 14 23:53:51.950266 kubelet[3377]: I0514 23:53:51.949283 3377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-etc-cni-netd\") pod \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") "
May 14 23:53:51.950266 kubelet[3377]: I0514 23:53:51.949300 3377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da6ff1a1-b772-4add-93cb-3d95d098d2dc-clustermesh-secrets\") pod \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") "
May 14 23:53:51.950266 kubelet[3377]: I0514 23:53:51.949317 3377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da6ff1a1-b772-4add-93cb-3d95d098d2dc-cilium-config-path\") pod \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") "
May 14 23:53:51.950266 kubelet[3377]: I0514 23:53:51.949330 3377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-hostproc\") pod \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") "
May 14 23:53:51.950266 kubelet[3377]: I0514 23:53:51.949343 3377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-cilium-run\") pod \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") "
May 14 23:53:51.950266 kubelet[3377]: I0514 23:53:51.949370 3377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-xtables-lock\") pod \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") "
May 14 23:53:51.950397 kubelet[3377]: I0514 23:53:51.949388 3377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a30f8932-940d-41e2-a7ce-134b00bf58ee-cilium-config-path\") pod \"a30f8932-940d-41e2-a7ce-134b00bf58ee\" (UID: \"a30f8932-940d-41e2-a7ce-134b00bf58ee\") "
May 14 23:53:51.950397 kubelet[3377]: I0514 23:53:51.949403 3377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-cni-path\") pod \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") "
May 14 23:53:51.950397 kubelet[3377]: I0514 23:53:51.949416 3377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-host-proc-sys-kernel\") pod \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") "
May 14 23:53:51.950397 kubelet[3377]: I0514 23:53:51.949432 3377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da6ff1a1-b772-4add-93cb-3d95d098d2dc-hubble-tls\") pod \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\" (UID: \"da6ff1a1-b772-4add-93cb-3d95d098d2dc\") "
May 14 23:53:51.953773 kubelet[3377]: I0514 23:53:51.952666 3377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6ff1a1-b772-4add-93cb-3d95d098d2dc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "da6ff1a1-b772-4add-93cb-3d95d098d2dc" (UID: "da6ff1a1-b772-4add-93cb-3d95d098d2dc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 14 23:53:51.953773 kubelet[3377]: I0514 23:53:51.952762 3377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "da6ff1a1-b772-4add-93cb-3d95d098d2dc" (UID: "da6ff1a1-b772-4add-93cb-3d95d098d2dc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:53:51.954014 kubelet[3377]: I0514 23:53:51.953964 3377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da6ff1a1-b772-4add-93cb-3d95d098d2dc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "da6ff1a1-b772-4add-93cb-3d95d098d2dc" (UID: "da6ff1a1-b772-4add-93cb-3d95d098d2dc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 14 23:53:51.954123 kubelet[3377]: I0514 23:53:51.954110 3377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-hostproc" (OuterVolumeSpecName: "hostproc") pod "da6ff1a1-b772-4add-93cb-3d95d098d2dc" (UID: "da6ff1a1-b772-4add-93cb-3d95d098d2dc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:53:51.954217 kubelet[3377]: I0514 23:53:51.954203 3377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "da6ff1a1-b772-4add-93cb-3d95d098d2dc" (UID: "da6ff1a1-b772-4add-93cb-3d95d098d2dc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:53:51.954301 kubelet[3377]: I0514 23:53:51.954263 3377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "da6ff1a1-b772-4add-93cb-3d95d098d2dc" (UID: "da6ff1a1-b772-4add-93cb-3d95d098d2dc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:53:51.954365 kubelet[3377]: I0514 23:53:51.954279 3377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "da6ff1a1-b772-4add-93cb-3d95d098d2dc" (UID: "da6ff1a1-b772-4add-93cb-3d95d098d2dc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:53:51.954438 kubelet[3377]: I0514 23:53:51.954418 3377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "da6ff1a1-b772-4add-93cb-3d95d098d2dc" (UID: "da6ff1a1-b772-4add-93cb-3d95d098d2dc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:53:51.954758 kubelet[3377]: I0514 23:53:51.954736 3377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "da6ff1a1-b772-4add-93cb-3d95d098d2dc" (UID: "da6ff1a1-b772-4add-93cb-3d95d098d2dc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:53:51.954899 kubelet[3377]: I0514 23:53:51.954882 3377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "da6ff1a1-b772-4add-93cb-3d95d098d2dc" (UID: "da6ff1a1-b772-4add-93cb-3d95d098d2dc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:53:51.955437 kubelet[3377]: I0514 23:53:51.955396 3377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-cni-path" (OuterVolumeSpecName: "cni-path") pod "da6ff1a1-b772-4add-93cb-3d95d098d2dc" (UID: "da6ff1a1-b772-4add-93cb-3d95d098d2dc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:53:51.955564 kubelet[3377]: I0514 23:53:51.955546 3377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "da6ff1a1-b772-4add-93cb-3d95d098d2dc" (UID: "da6ff1a1-b772-4add-93cb-3d95d098d2dc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:53:51.956064 kubelet[3377]: I0514 23:53:51.956035 3377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da6ff1a1-b772-4add-93cb-3d95d098d2dc-kube-api-access-djgxf" (OuterVolumeSpecName: "kube-api-access-djgxf") pod "da6ff1a1-b772-4add-93cb-3d95d098d2dc" (UID: "da6ff1a1-b772-4add-93cb-3d95d098d2dc"). InnerVolumeSpecName "kube-api-access-djgxf". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 23:53:51.956444 kubelet[3377]: I0514 23:53:51.956421 3377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a30f8932-940d-41e2-a7ce-134b00bf58ee-kube-api-access-pxx8v" (OuterVolumeSpecName: "kube-api-access-pxx8v") pod "a30f8932-940d-41e2-a7ce-134b00bf58ee" (UID: "a30f8932-940d-41e2-a7ce-134b00bf58ee"). InnerVolumeSpecName "kube-api-access-pxx8v". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 23:53:51.959350 kubelet[3377]: I0514 23:53:51.959320 3377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da6ff1a1-b772-4add-93cb-3d95d098d2dc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "da6ff1a1-b772-4add-93cb-3d95d098d2dc" (UID: "da6ff1a1-b772-4add-93cb-3d95d098d2dc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 23:53:51.959648 kubelet[3377]: I0514 23:53:51.959620 3377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a30f8932-940d-41e2-a7ce-134b00bf58ee-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a30f8932-940d-41e2-a7ce-134b00bf58ee" (UID: "a30f8932-940d-41e2-a7ce-134b00bf58ee"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 14 23:53:52.049930 kubelet[3377]: I0514 23:53:52.049732 3377 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-host-proc-sys-net\") on node \"ci-4230.1.1-n-00beb67e77\" DevicePath \"\""
May 14 23:53:52.049930 kubelet[3377]: I0514 23:53:52.049764 3377 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-etc-cni-netd\") on node \"ci-4230.1.1-n-00beb67e77\" DevicePath \"\""
May 14 23:53:52.049930 kubelet[3377]: I0514 23:53:52.049773 3377 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da6ff1a1-b772-4add-93cb-3d95d098d2dc-clustermesh-secrets\") on node \"ci-4230.1.1-n-00beb67e77\" DevicePath \"\""
May 14 23:53:52.049930 kubelet[3377]: I0514 23:53:52.049783 3377 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da6ff1a1-b772-4add-93cb-3d95d098d2dc-cilium-config-path\") on node \"ci-4230.1.1-n-00beb67e77\" DevicePath \"\""
May 14 23:53:52.049930 kubelet[3377]: I0514 23:53:52.049792 3377 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-hostproc\") on node \"ci-4230.1.1-n-00beb67e77\" DevicePath \"\""
May 14 23:53:52.049930 kubelet[3377]: I0514 23:53:52.049800 3377 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-cilium-run\") on node \"ci-4230.1.1-n-00beb67e77\" DevicePath \"\""
May 14 23:53:52.049930 kubelet[3377]: I0514 23:53:52.049812 3377 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-xtables-lock\") on node \"ci-4230.1.1-n-00beb67e77\" DevicePath \"\""
May 14 23:53:52.049930 kubelet[3377]: I0514 23:53:52.049822 3377 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a30f8932-940d-41e2-a7ce-134b00bf58ee-cilium-config-path\") on node \"ci-4230.1.1-n-00beb67e77\" DevicePath \"\""
May 14 23:53:52.050203 kubelet[3377]: I0514 23:53:52.049829 3377 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-cni-path\") on node \"ci-4230.1.1-n-00beb67e77\" DevicePath \"\""
May 14 23:53:52.050203 kubelet[3377]: I0514 23:53:52.049837 3377 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-host-proc-sys-kernel\") on node \"ci-4230.1.1-n-00beb67e77\" DevicePath \"\""
May 14 23:53:52.050203 kubelet[3377]: I0514 23:53:52.049869 3377 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da6ff1a1-b772-4add-93cb-3d95d098d2dc-hubble-tls\") on node \"ci-4230.1.1-n-00beb67e77\" DevicePath \"\""
May 14 23:53:52.050203 kubelet[3377]: I0514 23:53:52.049878 3377 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-lib-modules\") on node \"ci-4230.1.1-n-00beb67e77\" DevicePath \"\""
May 14 23:53:52.050203 kubelet[3377]: I0514 23:53:52.049886 3377 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-djgxf\" (UniqueName: \"kubernetes.io/projected/da6ff1a1-b772-4add-93cb-3d95d098d2dc-kube-api-access-djgxf\") on node \"ci-4230.1.1-n-00beb67e77\" DevicePath \"\""
May 14 23:53:52.050203 kubelet[3377]: I0514 23:53:52.049895 3377 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-cilium-cgroup\") on node \"ci-4230.1.1-n-00beb67e77\" DevicePath \"\""
May 14 23:53:52.050203 kubelet[3377]: I0514 23:53:52.049903 3377 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da6ff1a1-b772-4add-93cb-3d95d098d2dc-bpf-maps\") on node \"ci-4230.1.1-n-00beb67e77\" DevicePath \"\""
May 14 23:53:52.050203 kubelet[3377]: I0514 23:53:52.049911 3377 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pxx8v\" (UniqueName: \"kubernetes.io/projected/a30f8932-940d-41e2-a7ce-134b00bf58ee-kube-api-access-pxx8v\") on node \"ci-4230.1.1-n-00beb67e77\" DevicePath \"\""
May 14 23:53:52.079307 kubelet[3377]: I0514 23:53:52.078801 3377 scope.go:117] "RemoveContainer" containerID="91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0"
May 14 23:53:52.085725 containerd[1746]: time="2025-05-14T23:53:52.084913219Z" level=info msg="RemoveContainer for \"91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0\""
May 14 23:53:52.087781 systemd[1]: Removed slice kubepods-besteffort-poda30f8932_940d_41e2_a7ce_134b00bf58ee.slice - libcontainer container kubepods-besteffort-poda30f8932_940d_41e2_a7ce_134b00bf58ee.slice.
May 14 23:53:52.094515 systemd[1]: Removed slice kubepods-burstable-podda6ff1a1_b772_4add_93cb_3d95d098d2dc.slice - libcontainer container kubepods-burstable-podda6ff1a1_b772_4add_93cb_3d95d098d2dc.slice.
May 14 23:53:52.094791 systemd[1]: kubepods-burstable-podda6ff1a1_b772_4add_93cb_3d95d098d2dc.slice: Consumed 6.296s CPU time, 125.3M memory peak, 128K read from disk, 12.9M written to disk.
May 14 23:53:52.099942 containerd[1746]: time="2025-05-14T23:53:52.099897314Z" level=info msg="RemoveContainer for \"91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0\" returns successfully"
May 14 23:53:52.100353 kubelet[3377]: I0514 23:53:52.100174 3377 scope.go:117] "RemoveContainer" containerID="91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0"
May 14 23:53:52.100847 containerd[1746]: time="2025-05-14T23:53:52.100799273Z" level=error msg="ContainerStatus for \"91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0\": not found"
May 14 23:53:52.100967 kubelet[3377]: E0514 23:53:52.100937 3377 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0\": not found" containerID="91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0"
May 14 23:53:52.101052 kubelet[3377]: I0514 23:53:52.100971 3377 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0"} err="failed to get container status \"91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0\": rpc error: code = NotFound desc = an error occurred when try to find container \"91584bd211b696113a534d1a2afec8ef1fb97ef59f166b41915b1d90ee1abaa0\": not found"
May 14 23:53:52.101052 kubelet[3377]: I0514 23:53:52.101049 3377 scope.go:117] "RemoveContainer" containerID="679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b"
May 14 23:53:52.102145 containerd[1746]: time="2025-05-14T23:53:52.102117231Z" level=info msg="RemoveContainer for \"679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b\""
May 14 23:53:52.111130 containerd[1746]: time="2025-05-14T23:53:52.111086176Z" level=info msg="RemoveContainer for \"679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b\" returns successfully"
May 14 23:53:52.111519 kubelet[3377]: I0514 23:53:52.111402 3377 scope.go:117] "RemoveContainer" containerID="72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac"
May 14 23:53:52.116029 containerd[1746]: time="2025-05-14T23:53:52.113926131Z" level=info msg="RemoveContainer for \"72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac\""
May 14 23:53:52.124503 containerd[1746]: time="2025-05-14T23:53:52.124464554Z" level=info msg="RemoveContainer for \"72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac\" returns successfully"
May 14 23:53:52.125011 kubelet[3377]: I0514 23:53:52.124879 3377 scope.go:117] "RemoveContainer" containerID="52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132"
May 14 23:53:52.126158 containerd[1746]: time="2025-05-14T23:53:52.126125711Z" level=info msg="RemoveContainer for \"52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132\""
May 14 23:53:52.133742 containerd[1746]: time="2025-05-14T23:53:52.133683178Z" level=info msg="RemoveContainer for \"52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132\" returns successfully"
May 14 23:53:52.133978 kubelet[3377]: I0514 23:53:52.133926 3377 scope.go:117] "RemoveContainer" containerID="70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945"
May 14 23:53:52.135123 containerd[1746]: time="2025-05-14T23:53:52.135071376Z" level=info msg="RemoveContainer for \"70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945\""
May 14 23:53:52.144412 containerd[1746]: time="2025-05-14T23:53:52.144378881Z" level=info msg="RemoveContainer for \"70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945\" returns successfully"
May 14 23:53:52.144740 kubelet[3377]: I0514 23:53:52.144630 3377 scope.go:117] "RemoveContainer" containerID="d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f"
May 14 23:53:52.145962 containerd[1746]: time="2025-05-14T23:53:52.145930678Z" level=info msg="RemoveContainer for \"d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f\""
May 14 23:53:52.154300 containerd[1746]: time="2025-05-14T23:53:52.154259904Z" level=info msg="RemoveContainer for \"d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f\" returns successfully"
May 14 23:53:52.154542 kubelet[3377]: I0514 23:53:52.154498 3377 scope.go:117] "RemoveContainer" containerID="679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b"
May 14 23:53:52.154827 containerd[1746]: time="2025-05-14T23:53:52.154747183Z" level=error msg="ContainerStatus for \"679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b\": not found"
May 14 23:53:52.155085 kubelet[3377]: E0514 23:53:52.154968 3377 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b\": not found" containerID="679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b"
May 14 23:53:52.155085 kubelet[3377]: I0514 23:53:52.154995 3377 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b"} err="failed to get container status \"679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b\": rpc error: code = NotFound desc = an error occurred when try to find container \"679fbb91f0f8eccbba9ad256be73d1b8c2662ddc8405a0ddf70241c25fcc856b\": not found"
May 14 23:53:52.155085 kubelet[3377]: I0514 23:53:52.155015 3377 scope.go:117] "RemoveContainer" containerID="72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac"
May 14 23:53:52.155380 containerd[1746]: time="2025-05-14T23:53:52.155312903Z" level=error msg="ContainerStatus for \"72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac\": not found"
May 14 23:53:52.155516 kubelet[3377]: E0514 23:53:52.155471 3377 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac\": not found" containerID="72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac"
May 14 23:53:52.155553 kubelet[3377]: I0514 23:53:52.155519 3377 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac"} err="failed to get container status \"72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac\": rpc error: code = NotFound desc = an error occurred when try to find container \"72925faa060381229367c4432b817fb732f0be7cd545e7dfeee8b48591015cac\": not found"
May 14 23:53:52.155553 kubelet[3377]: I0514 23:53:52.155539 3377 scope.go:117] "RemoveContainer" containerID="52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132"
May 14 23:53:52.155803 containerd[1746]: time="2025-05-14T23:53:52.155767982Z" level=error msg="ContainerStatus for \"52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132\": not found"
May 14 23:53:52.155917 kubelet[3377]: E0514 23:53:52.155891 3377 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132\": not found" containerID="52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132"
May 14 23:53:52.155953 kubelet[3377]: I0514 23:53:52.155921 3377 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132"} err="failed to get container status \"52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132\": rpc error: code = NotFound desc = an error occurred when try to find container \"52f5e7c03bf72c041d3de820a7f9a835e5e4645d475c7fde1688569e5f71a132\": not found"
May 14 23:53:52.155953 kubelet[3377]: I0514 23:53:52.155937 3377 scope.go:117] "RemoveContainer" containerID="70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945"
May 14 23:53:52.156171 containerd[1746]: time="2025-05-14T23:53:52.156136541Z" level=error msg="ContainerStatus for \"70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945\": not found"
May 14 23:53:52.156287 kubelet[3377]: E0514 23:53:52.156263 3377 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945\": not found" containerID="70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945"
May 14 23:53:52.156380 kubelet[3377]: I0514 23:53:52.156358 3377 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945"} err="failed to get container status \"70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945\": rpc error: code = NotFound desc = an error occurred when try to find container \"70babcdf6c8f0ac7a170de8606961932e815a98c6720e0dd1360dfb8a470a945\": not found"
May 14 23:53:52.156461 kubelet[3377]: I0514 23:53:52.156380 3377 scope.go:117] "RemoveContainer" containerID="d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f"
May 14 23:53:52.156621 containerd[1746]: time="2025-05-14T23:53:52.156588620Z" level=error msg="ContainerStatus for \"d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f\": not found"
May 14 23:53:52.156747 kubelet[3377]: E0514 23:53:52.156722 3377 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f\": not found" containerID="d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f"
May 14 23:53:52.156800 kubelet[3377]: I0514 23:53:52.156747 3377 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f"} err="failed to get container status \"d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5fce99c22e40d7652bb6690ab867dc74e98ea2afc09963c42d74f74812f9c8f\": not found"
May 14 23:53:52.629207 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80da24227f8123dbc5c97a6edbfd1338fbfad3138bc686183420322863abfc68-rootfs.mount: Deactivated successfully.
May 14 23:53:52.629308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-321ac12536c334d1e0760efb6c00c4e7f3711fe6b7ed4d126261de1148560ca3-rootfs.mount: Deactivated successfully.
May 14 23:53:52.629357 systemd[1]: var-lib-kubelet-pods-a30f8932\x2d940d\x2d41e2\x2da7ce\x2d134b00bf58ee-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpxx8v.mount: Deactivated successfully.
May 14 23:53:52.629413 systemd[1]: var-lib-kubelet-pods-da6ff1a1\x2db772\x2d4add\x2d93cb\x2d3d95d098d2dc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddjgxf.mount: Deactivated successfully.
May 14 23:53:52.629462 systemd[1]: var-lib-kubelet-pods-da6ff1a1\x2db772\x2d4add\x2d93cb\x2d3d95d098d2dc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 14 23:53:52.629512 systemd[1]: var-lib-kubelet-pods-da6ff1a1\x2db772\x2d4add\x2d93cb\x2d3d95d098d2dc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 14 23:53:52.785765 kubelet[3377]: E0514 23:53:52.785718 3377 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 23:53:53.628821 sshd[4988]: Connection closed by 10.200.16.10 port 53112
May 14 23:53:53.629515 sshd-session[4986]: pam_unix(sshd:session): session closed for user core
May 14 23:53:53.632341 systemd-logind[1717]: Session 25 logged out. Waiting for processes to exit.
May 14 23:53:53.633263 systemd[1]: sshd@22-10.200.20.38:22-10.200.16.10:53112.service: Deactivated successfully.
May 14 23:53:53.635203 systemd[1]: session-25.scope: Deactivated successfully.
May 14 23:53:53.635409 systemd[1]: session-25.scope: Consumed 1.456s CPU time, 23.6M memory peak.
May 14 23:53:53.636864 systemd-logind[1717]: Removed session 25.
May 14 23:53:53.681509 kubelet[3377]: I0514 23:53:53.681470 3377 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a30f8932-940d-41e2-a7ce-134b00bf58ee" path="/var/lib/kubelet/pods/a30f8932-940d-41e2-a7ce-134b00bf58ee/volumes"
May 14 23:53:53.681942 kubelet[3377]: I0514 23:53:53.681885 3377 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da6ff1a1-b772-4add-93cb-3d95d098d2dc" path="/var/lib/kubelet/pods/da6ff1a1-b772-4add-93cb-3d95d098d2dc/volumes"
May 14 23:53:53.717213 systemd[1]: Started sshd@23-10.200.20.38:22-10.200.16.10:53124.service - OpenSSH per-connection server daemon (10.200.16.10:53124).
May 14 23:53:54.173259 sshd[5150]: Accepted publickey for core from 10.200.16.10 port 53124 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:53:54.175014 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:53:54.180794 systemd-logind[1717]: New session 26 of user core.
May 14 23:53:54.187854 systemd[1]: Started session-26.scope - Session 26 of User core.
May 14 23:53:55.640364 kubelet[3377]: E0514 23:53:55.639184 3377 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a30f8932-940d-41e2-a7ce-134b00bf58ee" containerName="cilium-operator"
May 14 23:53:55.640364 kubelet[3377]: E0514 23:53:55.639212 3377 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da6ff1a1-b772-4add-93cb-3d95d098d2dc" containerName="cilium-agent"
May 14 23:53:55.640364 kubelet[3377]: E0514 23:53:55.639220 3377 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da6ff1a1-b772-4add-93cb-3d95d098d2dc" containerName="mount-cgroup"
May 14 23:53:55.640364 kubelet[3377]: E0514 23:53:55.639226 3377 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da6ff1a1-b772-4add-93cb-3d95d098d2dc" containerName="apply-sysctl-overwrites"
May 14 23:53:55.640364 kubelet[3377]: E0514 23:53:55.639231 3377 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da6ff1a1-b772-4add-93cb-3d95d098d2dc" containerName="mount-bpf-fs"
May 14 23:53:55.640364 kubelet[3377]: E0514 23:53:55.639236 3377 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da6ff1a1-b772-4add-93cb-3d95d098d2dc" containerName="clean-cilium-state"
May 14 23:53:55.640364 kubelet[3377]: I0514 23:53:55.639262 3377 memory_manager.go:354] "RemoveStaleState removing state" podUID="da6ff1a1-b772-4add-93cb-3d95d098d2dc" containerName="cilium-agent"
May 14 23:53:55.640364 kubelet[3377]: I0514 23:53:55.639269 3377 memory_manager.go:354] "RemoveStaleState removing state" podUID="a30f8932-940d-41e2-a7ce-134b00bf58ee" containerName="cilium-operator"
May 14 23:53:55.648649 systemd[1]: Created slice kubepods-burstable-pod60668207_c64e_4da4_8631_b501b5e4826a.slice - libcontainer container kubepods-burstable-pod60668207_c64e_4da4_8631_b501b5e4826a.slice.
May 14 23:53:55.703114 sshd[5153]: Connection closed by 10.200.16.10 port 53124
May 14 23:53:55.704204 sshd-session[5150]: pam_unix(sshd:session): session closed for user core
May 14 23:53:55.708140 systemd[1]: sshd@23-10.200.20.38:22-10.200.16.10:53124.service: Deactivated successfully.
May 14 23:53:55.713941 systemd[1]: session-26.scope: Deactivated successfully.
May 14 23:53:55.714140 systemd[1]: session-26.scope: Consumed 1.105s CPU time, 23.8M memory peak.
May 14 23:53:55.714800 systemd-logind[1717]: Session 26 logged out. Waiting for processes to exit.
May 14 23:53:55.716533 systemd-logind[1717]: Removed session 26.
May 14 23:53:55.769952 kubelet[3377]: I0514 23:53:55.769854 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/60668207-c64e-4da4-8631-b501b5e4826a-bpf-maps\") pod \"cilium-rhhx7\" (UID: \"60668207-c64e-4da4-8631-b501b5e4826a\") " pod="kube-system/cilium-rhhx7"
May 14 23:53:55.769952 kubelet[3377]: I0514 23:53:55.769895 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/60668207-c64e-4da4-8631-b501b5e4826a-cilium-ipsec-secrets\") pod \"cilium-rhhx7\" (UID: \"60668207-c64e-4da4-8631-b501b5e4826a\") " pod="kube-system/cilium-rhhx7"
May 14 23:53:55.769952 kubelet[3377]: I0514 23:53:55.769915 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/60668207-c64e-4da4-8631-b501b5e4826a-hostproc\") pod \"cilium-rhhx7\" (UID: \"60668207-c64e-4da4-8631-b501b5e4826a\") " pod="kube-system/cilium-rhhx7"
May 14 23:53:55.769952 kubelet[3377]: I0514 23:53:55.769931 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/60668207-c64e-4da4-8631-b501b5e4826a-clustermesh-secrets\") pod \"cilium-rhhx7\" (UID: \"60668207-c64e-4da4-8631-b501b5e4826a\") " pod="kube-system/cilium-rhhx7"
May 14 23:53:55.770256 kubelet[3377]: I0514 23:53:55.769963 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8szw\" (UniqueName: \"kubernetes.io/projected/60668207-c64e-4da4-8631-b501b5e4826a-kube-api-access-j8szw\") pod \"cilium-rhhx7\" (UID: \"60668207-c64e-4da4-8631-b501b5e4826a\") " pod="kube-system/cilium-rhhx7"
May 14 23:53:55.770256 kubelet[3377]: I0514 23:53:55.769999 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60668207-c64e-4da4-8631-b501b5e4826a-etc-cni-netd\") pod \"cilium-rhhx7\" (UID: \"60668207-c64e-4da4-8631-b501b5e4826a\") " pod="kube-system/cilium-rhhx7"
May 14 23:53:55.770256 kubelet[3377]: I0514 23:53:55.770033 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/60668207-c64e-4da4-8631-b501b5e4826a-hubble-tls\") pod \"cilium-rhhx7\" (UID: \"60668207-c64e-4da4-8631-b501b5e4826a\") " pod="kube-system/cilium-rhhx7"
May 14 23:53:55.770256 kubelet[3377]: I0514 23:53:55.770050 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60668207-c64e-4da4-8631-b501b5e4826a-lib-modules\") pod \"cilium-rhhx7\" (UID: \"60668207-c64e-4da4-8631-b501b5e4826a\") " pod="kube-system/cilium-rhhx7"
May 14 23:53:55.770256 kubelet[3377]: I0514 23:53:55.770067 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/60668207-c64e-4da4-8631-b501b5e4826a-host-proc-sys-kernel\") pod \"cilium-rhhx7\" (UID: \"60668207-c64e-4da4-8631-b501b5e4826a\") " pod="kube-system/cilium-rhhx7"
May 14 23:53:55.770256 kubelet[3377]: I0514 23:53:55.770088 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/60668207-c64e-4da4-8631-b501b5e4826a-cilium-run\") pod \"cilium-rhhx7\" (UID: \"60668207-c64e-4da4-8631-b501b5e4826a\") " pod="kube-system/cilium-rhhx7"
May 14 23:53:55.770387 kubelet[3377]: I0514 23:53:55.770102 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/60668207-c64e-4da4-8631-b501b5e4826a-cni-path\") pod \"cilium-rhhx7\" (UID: \"60668207-c64e-4da4-8631-b501b5e4826a\") " pod="kube-system/cilium-rhhx7"
May 14 23:53:55.770387 kubelet[3377]: I0514 23:53:55.770117 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/60668207-c64e-4da4-8631-b501b5e4826a-cilium-cgroup\") pod \"cilium-rhhx7\" (UID: \"60668207-c64e-4da4-8631-b501b5e4826a\") " pod="kube-system/cilium-rhhx7"
May 14 23:53:55.770387 kubelet[3377]: I0514 23:53:55.770134 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60668207-c64e-4da4-8631-b501b5e4826a-cilium-config-path\") pod \"cilium-rhhx7\" (UID: \"60668207-c64e-4da4-8631-b501b5e4826a\") " pod="kube-system/cilium-rhhx7"
May 14 23:53:55.770387 kubelet[3377]: I0514 23:53:55.770148 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60668207-c64e-4da4-8631-b501b5e4826a-xtables-lock\") pod \"cilium-rhhx7\" (UID: \"60668207-c64e-4da4-8631-b501b5e4826a\") " pod="kube-system/cilium-rhhx7"
May 14 23:53:55.770387 kubelet[3377]: I0514 23:53:55.770163 3377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/60668207-c64e-4da4-8631-b501b5e4826a-host-proc-sys-net\") pod \"cilium-rhhx7\" (UID: \"60668207-c64e-4da4-8631-b501b5e4826a\") " pod="kube-system/cilium-rhhx7"
May 14 23:53:55.797927 systemd[1]: Started sshd@24-10.200.20.38:22-10.200.16.10:53134.service - OpenSSH per-connection server daemon (10.200.16.10:53134).
May 14 23:53:55.953016 containerd[1746]: time="2025-05-14T23:53:55.952874058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rhhx7,Uid:60668207-c64e-4da4-8631-b501b5e4826a,Namespace:kube-system,Attempt:0,}"
May 14 23:53:55.988169 containerd[1746]: time="2025-05-14T23:53:55.987770527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:53:55.988169 containerd[1746]: time="2025-05-14T23:53:55.987969326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:53:55.988169 containerd[1746]: time="2025-05-14T23:53:55.987998166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:53:55.988169 containerd[1746]: time="2025-05-14T23:53:55.988102766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:53:56.005885 systemd[1]: Started cri-containerd-9f59b71941e12145dc9a27b63e3b1a83afca9d4a28c1660bbe8b4c0bd741ff4f.scope - libcontainer container 9f59b71941e12145dc9a27b63e3b1a83afca9d4a28c1660bbe8b4c0bd741ff4f.
May 14 23:53:56.025995 containerd[1746]: time="2025-05-14T23:53:56.025940310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rhhx7,Uid:60668207-c64e-4da4-8631-b501b5e4826a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f59b71941e12145dc9a27b63e3b1a83afca9d4a28c1660bbe8b4c0bd741ff4f\""
May 14 23:53:56.031126 containerd[1746]: time="2025-05-14T23:53:56.029984304Z" level=info msg="CreateContainer within sandbox \"9f59b71941e12145dc9a27b63e3b1a83afca9d4a28c1660bbe8b4c0bd741ff4f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 23:53:56.064638 containerd[1746]: time="2025-05-14T23:53:56.064586814Z" level=info msg="CreateContainer within sandbox \"9f59b71941e12145dc9a27b63e3b1a83afca9d4a28c1660bbe8b4c0bd741ff4f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"057da6c15c9ffc08dd43c8a96197ca430e97393259dba0f46d286a45cceea566\""
May 14 23:53:56.066576 containerd[1746]: time="2025-05-14T23:53:56.066543811Z" level=info msg="StartContainer for \"057da6c15c9ffc08dd43c8a96197ca430e97393259dba0f46d286a45cceea566\""
May 14 23:53:56.091895 systemd[1]: Started cri-containerd-057da6c15c9ffc08dd43c8a96197ca430e97393259dba0f46d286a45cceea566.scope - libcontainer container 057da6c15c9ffc08dd43c8a96197ca430e97393259dba0f46d286a45cceea566.
May 14 23:53:56.119629 containerd[1746]: time="2025-05-14T23:53:56.119528813Z" level=info msg="StartContainer for \"057da6c15c9ffc08dd43c8a96197ca430e97393259dba0f46d286a45cceea566\" returns successfully"
May 14 23:53:56.130065 systemd[1]: cri-containerd-057da6c15c9ffc08dd43c8a96197ca430e97393259dba0f46d286a45cceea566.scope: Deactivated successfully.
May 14 23:53:56.211145 containerd[1746]: time="2025-05-14T23:53:56.210985798Z" level=info msg="shim disconnected" id=057da6c15c9ffc08dd43c8a96197ca430e97393259dba0f46d286a45cceea566 namespace=k8s.io
May 14 23:53:56.211145 containerd[1746]: time="2025-05-14T23:53:56.211063478Z" level=warning msg="cleaning up after shim disconnected" id=057da6c15c9ffc08dd43c8a96197ca430e97393259dba0f46d286a45cceea566 namespace=k8s.io
May 14 23:53:56.211145 containerd[1746]: time="2025-05-14T23:53:56.211072838Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:53:56.279126 sshd[5165]: Accepted publickey for core from 10.200.16.10 port 53134 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:53:56.280435 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:53:56.284994 systemd-logind[1717]: New session 27 of user core.
May 14 23:53:56.290885 systemd[1]: Started session-27.scope - Session 27 of User core.
May 14 23:53:56.620218 sshd[5274]: Connection closed by 10.200.16.10 port 53134
May 14 23:53:56.621043 sshd-session[5165]: pam_unix(sshd:session): session closed for user core
May 14 23:53:56.624550 systemd[1]: sshd@24-10.200.20.38:22-10.200.16.10:53134.service: Deactivated successfully.
May 14 23:53:56.626642 systemd[1]: session-27.scope: Deactivated successfully.
May 14 23:53:56.627598 systemd-logind[1717]: Session 27 logged out. Waiting for processes to exit.
May 14 23:53:56.628508 systemd-logind[1717]: Removed session 27.
May 14 23:53:56.710983 systemd[1]: Started sshd@25-10.200.20.38:22-10.200.16.10:53142.service - OpenSSH per-connection server daemon (10.200.16.10:53142).
May 14 23:53:57.107402 containerd[1746]: time="2025-05-14T23:53:57.107195801Z" level=info msg="CreateContainer within sandbox \"9f59b71941e12145dc9a27b63e3b1a83afca9d4a28c1660bbe8b4c0bd741ff4f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 23:53:57.144162 containerd[1746]: time="2025-05-14T23:53:57.143815107Z" level=info msg="CreateContainer within sandbox \"9f59b71941e12145dc9a27b63e3b1a83afca9d4a28c1660bbe8b4c0bd741ff4f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1dc38d94f2ba18483f227578b232b5b0fea6f73015c48b1033c35d3411f0809e\""
May 14 23:53:57.145737 containerd[1746]: time="2025-05-14T23:53:57.145589584Z" level=info msg="StartContainer for \"1dc38d94f2ba18483f227578b232b5b0fea6f73015c48b1033c35d3411f0809e\""
May 14 23:53:57.160634 sshd[5282]: Accepted publickey for core from 10.200.16.10 port 53142 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:53:57.163367 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:53:57.171830 systemd-logind[1717]: New session 28 of user core.
May 14 23:53:57.179063 systemd[1]: Started cri-containerd-1dc38d94f2ba18483f227578b232b5b0fea6f73015c48b1033c35d3411f0809e.scope - libcontainer container 1dc38d94f2ba18483f227578b232b5b0fea6f73015c48b1033c35d3411f0809e.
May 14 23:53:57.180373 systemd[1]: Started session-28.scope - Session 28 of User core.
May 14 23:53:57.216994 containerd[1746]: time="2025-05-14T23:53:57.216893799Z" level=info msg="StartContainer for \"1dc38d94f2ba18483f227578b232b5b0fea6f73015c48b1033c35d3411f0809e\" returns successfully"
May 14 23:53:57.219437 systemd[1]: cri-containerd-1dc38d94f2ba18483f227578b232b5b0fea6f73015c48b1033c35d3411f0809e.scope: Deactivated successfully.
May 14 23:53:57.254219 containerd[1746]: time="2025-05-14T23:53:57.254128265Z" level=info msg="shim disconnected" id=1dc38d94f2ba18483f227578b232b5b0fea6f73015c48b1033c35d3411f0809e namespace=k8s.io
May 14 23:53:57.254219 containerd[1746]: time="2025-05-14T23:53:57.254196625Z" level=warning msg="cleaning up after shim disconnected" id=1dc38d94f2ba18483f227578b232b5b0fea6f73015c48b1033c35d3411f0809e namespace=k8s.io
May 14 23:53:57.254219 containerd[1746]: time="2025-05-14T23:53:57.254207825Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:53:57.787225 kubelet[3377]: E0514 23:53:57.787181 3377 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 23:53:57.877395 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dc38d94f2ba18483f227578b232b5b0fea6f73015c48b1033c35d3411f0809e-rootfs.mount: Deactivated successfully.
May 14 23:53:58.113709 containerd[1746]: time="2025-05-14T23:53:58.113123082Z" level=info msg="CreateContainer within sandbox \"9f59b71941e12145dc9a27b63e3b1a83afca9d4a28c1660bbe8b4c0bd741ff4f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 23:53:58.160359 containerd[1746]: time="2025-05-14T23:53:58.160308292Z" level=info msg="CreateContainer within sandbox \"9f59b71941e12145dc9a27b63e3b1a83afca9d4a28c1660bbe8b4c0bd741ff4f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0e9080e16e14240b0b78562e2b6c0302b5894484d7ada38140aecbe2ef0959dd\""
May 14 23:53:58.162127 containerd[1746]: time="2025-05-14T23:53:58.160998971Z" level=info msg="StartContainer for \"0e9080e16e14240b0b78562e2b6c0302b5894484d7ada38140aecbe2ef0959dd\""
May 14 23:53:58.188908 systemd[1]: Started cri-containerd-0e9080e16e14240b0b78562e2b6c0302b5894484d7ada38140aecbe2ef0959dd.scope - libcontainer container 0e9080e16e14240b0b78562e2b6c0302b5894484d7ada38140aecbe2ef0959dd.
May 14 23:53:58.219347 systemd[1]: cri-containerd-0e9080e16e14240b0b78562e2b6c0302b5894484d7ada38140aecbe2ef0959dd.scope: Deactivated successfully.
May 14 23:53:58.222251 containerd[1746]: time="2025-05-14T23:53:58.222034402Z" level=info msg="StartContainer for \"0e9080e16e14240b0b78562e2b6c0302b5894484d7ada38140aecbe2ef0959dd\" returns successfully"
May 14 23:53:58.260268 containerd[1746]: time="2025-05-14T23:53:58.260210706Z" level=info msg="shim disconnected" id=0e9080e16e14240b0b78562e2b6c0302b5894484d7ada38140aecbe2ef0959dd namespace=k8s.io
May 14 23:53:58.260850 containerd[1746]: time="2025-05-14T23:53:58.260582185Z" level=warning msg="cleaning up after shim disconnected" id=0e9080e16e14240b0b78562e2b6c0302b5894484d7ada38140aecbe2ef0959dd namespace=k8s.io
May 14 23:53:58.260850 containerd[1746]: time="2025-05-14T23:53:58.260599865Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:53:58.877301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e9080e16e14240b0b78562e2b6c0302b5894484d7ada38140aecbe2ef0959dd-rootfs.mount: Deactivated successfully.
May 14 23:53:59.116348 containerd[1746]: time="2025-05-14T23:53:59.116310447Z" level=info msg="CreateContainer within sandbox \"9f59b71941e12145dc9a27b63e3b1a83afca9d4a28c1660bbe8b4c0bd741ff4f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 23:53:59.150342 containerd[1746]: time="2025-05-14T23:53:59.150211637Z" level=info msg="CreateContainer within sandbox \"9f59b71941e12145dc9a27b63e3b1a83afca9d4a28c1660bbe8b4c0bd741ff4f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"12397b425677e07a488757a9988067dbd6099d3d205e897e97ddeecf11ead2be\""
May 14 23:53:59.152889 containerd[1746]: time="2025-05-14T23:53:59.152262514Z" level=info msg="StartContainer for \"12397b425677e07a488757a9988067dbd6099d3d205e897e97ddeecf11ead2be\""
May 14 23:53:59.181930 systemd[1]: Started cri-containerd-12397b425677e07a488757a9988067dbd6099d3d205e897e97ddeecf11ead2be.scope - libcontainer container 12397b425677e07a488757a9988067dbd6099d3d205e897e97ddeecf11ead2be.
May 14 23:53:59.204291 systemd[1]: cri-containerd-12397b425677e07a488757a9988067dbd6099d3d205e897e97ddeecf11ead2be.scope: Deactivated successfully.
May 14 23:53:59.208948 containerd[1746]: time="2025-05-14T23:53:59.208909631Z" level=info msg="StartContainer for \"12397b425677e07a488757a9988067dbd6099d3d205e897e97ddeecf11ead2be\" returns successfully"
May 14 23:53:59.240312 containerd[1746]: time="2025-05-14T23:53:59.240176865Z" level=info msg="shim disconnected" id=12397b425677e07a488757a9988067dbd6099d3d205e897e97ddeecf11ead2be namespace=k8s.io
May 14 23:53:59.240312 containerd[1746]: time="2025-05-14T23:53:59.240229225Z" level=warning msg="cleaning up after shim disconnected" id=12397b425677e07a488757a9988067dbd6099d3d205e897e97ddeecf11ead2be namespace=k8s.io
May 14 23:53:59.240312 containerd[1746]: time="2025-05-14T23:53:59.240236825Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:53:59.877299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12397b425677e07a488757a9988067dbd6099d3d205e897e97ddeecf11ead2be-rootfs.mount: Deactivated successfully.
May 14 23:54:00.124310 containerd[1746]: time="2025-05-14T23:54:00.124195085Z" level=info msg="CreateContainer within sandbox \"9f59b71941e12145dc9a27b63e3b1a83afca9d4a28c1660bbe8b4c0bd741ff4f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 23:54:00.166176 containerd[1746]: time="2025-05-14T23:54:00.166068384Z" level=info msg="CreateContainer within sandbox \"9f59b71941e12145dc9a27b63e3b1a83afca9d4a28c1660bbe8b4c0bd741ff4f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7f204de6126a9a78a9a969c851215c7841fc644be6a6818f1aff9e9f1ee75e3c\""
May 14 23:54:00.167372 containerd[1746]: time="2025-05-14T23:54:00.167347662Z" level=info msg="StartContainer for \"7f204de6126a9a78a9a969c851215c7841fc644be6a6818f1aff9e9f1ee75e3c\""
May 14 23:54:00.199873 systemd[1]: Started cri-containerd-7f204de6126a9a78a9a969c851215c7841fc644be6a6818f1aff9e9f1ee75e3c.scope - libcontainer container 7f204de6126a9a78a9a969c851215c7841fc644be6a6818f1aff9e9f1ee75e3c.
May 14 23:54:00.230969 containerd[1746]: time="2025-05-14T23:54:00.230734209Z" level=info msg="StartContainer for \"7f204de6126a9a78a9a969c851215c7841fc644be6a6818f1aff9e9f1ee75e3c\" returns successfully"
May 14 23:54:00.777735 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 14 23:54:01.147996 kubelet[3377]: I0514 23:54:01.147804 3377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rhhx7" podStartSLOduration=6.14778798 podStartE2EDuration="6.14778798s" podCreationTimestamp="2025-05-14 23:53:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:54:01.147594021 +0000 UTC m=+193.580942270" watchObservedRunningTime="2025-05-14 23:54:01.14778798 +0000 UTC m=+193.581136229"
May 14 23:54:01.561459 systemd[1]: run-containerd-runc-k8s.io-7f204de6126a9a78a9a969c851215c7841fc644be6a6818f1aff9e9f1ee75e3c-runc.ErJ2Ht.mount: Deactivated successfully.
May 14 23:54:01.567825 kubelet[3377]: I0514 23:54:01.567686 3377 setters.go:600] "Node became not ready" node="ci-4230.1.1-n-00beb67e77" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T23:54:01Z","lastTransitionTime":"2025-05-14T23:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 14 23:54:03.509807 systemd-networkd[1343]: lxc_health: Link UP
May 14 23:54:03.530431 systemd-networkd[1343]: lxc_health: Gained carrier
May 14 23:54:05.299890 systemd-networkd[1343]: lxc_health: Gained IPv6LL
May 14 23:54:05.925119 systemd[1]: run-containerd-runc-k8s.io-7f204de6126a9a78a9a969c851215c7841fc644be6a6818f1aff9e9f1ee75e3c-runc.qZlfUL.mount: Deactivated successfully.
May 14 23:54:10.279006 sshd[5308]: Connection closed by 10.200.16.10 port 53142
May 14 23:54:10.279662 sshd-session[5282]: pam_unix(sshd:session): session closed for user core
May 14 23:54:10.282534 systemd-logind[1717]: Session 28 logged out. Waiting for processes to exit.
May 14 23:54:10.284367 systemd[1]: sshd@25-10.200.20.38:22-10.200.16.10:53142.service: Deactivated successfully.
May 14 23:54:10.286592 systemd[1]: session-28.scope: Deactivated successfully.
May 14 23:54:10.288216 systemd-logind[1717]: Removed session 28.