May 27 17:00:50.065296 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] May 27 17:00:50.065316 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 27 15:31:23 -00 2025 May 27 17:00:50.065323 kernel: KASLR enabled May 27 17:00:50.065327 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') May 27 17:00:50.065331 kernel: printk: legacy bootconsole [pl11] enabled May 27 17:00:50.065335 kernel: efi: EFI v2.7 by EDK II May 27 17:00:50.065340 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead5018 RNG=0x3fd5f998 MEMRESERVE=0x3e471598 May 27 17:00:50.065344 kernel: random: crng init done May 27 17:00:50.065348 kernel: secureboot: Secure boot disabled May 27 17:00:50.065352 kernel: ACPI: Early table checksum verification disabled May 27 17:00:50.065355 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) May 27 17:00:50.065359 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 27 17:00:50.065363 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) May 27 17:00:50.065368 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) May 27 17:00:50.065373 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) May 27 17:00:50.065377 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 27 17:00:50.065381 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 27 17:00:50.065387 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 27 17:00:50.065391 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) May 27 17:00:50.065395 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) May 27 17:00:50.065399 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) May 27 17:00:50.065403 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 27 17:00:50.065407 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 May 27 17:00:50.065411 kernel: ACPI: Use ACPI SPCR as default console: Yes May 27 17:00:50.065415 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug May 27 17:00:50.065419 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug May 27 17:00:50.065423 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug May 27 17:00:50.065428 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug May 27 17:00:50.065432 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug May 27 17:00:50.065437 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug May 27 17:00:50.065441 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug May 27 17:00:50.065445 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug May 27 17:00:50.065449 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug May 27 17:00:50.065453 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug May 27 17:00:50.065457 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug May 27 17:00:50.065461 kernel: ACPI: SRAT: Node 0 PXM 
0 [mem 0x800000000000-0xffffffffffff] hotplug May 27 17:00:50.065465 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] May 27 17:00:50.065469 kernel: NODE_DATA(0) allocated [mem 0x1bf7fddc0-0x1bf804fff] May 27 17:00:50.065473 kernel: Zone ranges: May 27 17:00:50.065477 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] May 27 17:00:50.065484 kernel: DMA32 empty May 27 17:00:50.065488 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] May 27 17:00:50.065493 kernel: Device empty May 27 17:00:50.065497 kernel: Movable zone start for each node May 27 17:00:50.065501 kernel: Early memory node ranges May 27 17:00:50.065506 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] May 27 17:00:50.065511 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] May 27 17:00:50.065515 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] May 27 17:00:50.065519 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] May 27 17:00:50.065523 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] May 27 17:00:50.065528 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] May 27 17:00:50.065532 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] May 27 17:00:50.065536 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] May 27 17:00:50.065540 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] May 27 17:00:50.065545 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] May 27 17:00:50.065549 kernel: On node 0, zone DMA: 36 pages in unavailable ranges May 27 17:00:50.065553 kernel: psci: probing for conduit method from ACPI. May 27 17:00:50.065559 kernel: psci: PSCIv1.1 detected in firmware. May 27 17:00:50.065563 kernel: psci: Using standard PSCI v0.2 function IDs May 27 17:00:50.065567 kernel: psci: MIGRATE_INFO_TYPE not supported. 
May 27 17:00:50.065571 kernel: psci: SMC Calling Convention v1.4 May 27 17:00:50.065576 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 May 27 17:00:50.065580 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 May 27 17:00:50.065584 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168 May 27 17:00:50.065589 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096 May 27 17:00:50.065593 kernel: pcpu-alloc: [0] 0 [0] 1 May 27 17:00:50.065597 kernel: Detected PIPT I-cache on CPU0 May 27 17:00:50.065602 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) May 27 17:00:50.065607 kernel: CPU features: detected: GIC system register CPU interface May 27 17:00:50.065624 kernel: CPU features: detected: Spectre-v4 May 27 17:00:50.065629 kernel: CPU features: detected: Spectre-BHB May 27 17:00:50.065633 kernel: CPU features: kernel page table isolation forced ON by KASLR May 27 17:00:50.065637 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 27 17:00:50.065642 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 May 27 17:00:50.065646 kernel: CPU features: detected: SSBS not fully self-synchronizing May 27 17:00:50.065650 kernel: alternatives: applying boot alternatives May 27 17:00:50.065655 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=4e706b869299e1c88703222069cdfa08c45ebce568f762053eea5b3f5f0939c3 May 27 17:00:50.065660 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 27 17:00:50.065664 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 27 17:00:50.065670 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 27 17:00:50.065675 kernel: Fallback order for Node 0: 0 May 27 17:00:50.065679 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 May 27 17:00:50.065683 kernel: Policy zone: Normal May 27 17:00:50.065687 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 27 17:00:50.065692 kernel: software IO TLB: area num 2. May 27 17:00:50.065696 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB) May 27 17:00:50.065700 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 27 17:00:50.065705 kernel: rcu: Preemptible hierarchical RCU implementation. May 27 17:00:50.065710 kernel: rcu: RCU event tracing is enabled. May 27 17:00:50.065714 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 27 17:00:50.065720 kernel: Trampoline variant of Tasks RCU enabled. May 27 17:00:50.065724 kernel: Tracing variant of Tasks RCU enabled. May 27 17:00:50.065728 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 27 17:00:50.065733 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 27 17:00:50.065737 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 27 17:00:50.065741 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
May 27 17:00:50.065746 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 27 17:00:50.065750 kernel: GICv3: 960 SPIs implemented May 27 17:00:50.065754 kernel: GICv3: 0 Extended SPIs implemented May 27 17:00:50.065759 kernel: Root IRQ handler: gic_handle_irq May 27 17:00:50.065763 kernel: GICv3: GICv3 features: 16 PPIs, RSS May 27 17:00:50.065767 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 May 27 17:00:50.065773 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 May 27 17:00:50.065777 kernel: ITS: No ITS available, not enabling LPIs May 27 17:00:50.065781 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 27 17:00:50.065786 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). May 27 17:00:50.065790 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 27 17:00:50.065794 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns May 27 17:00:50.065799 kernel: Console: colour dummy device 80x25 May 27 17:00:50.065803 kernel: printk: legacy console [tty1] enabled May 27 17:00:50.065808 kernel: ACPI: Core revision 20240827 May 27 17:00:50.065813 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) May 27 17:00:50.065818 kernel: pid_max: default: 32768 minimum: 301 May 27 17:00:50.065822 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 27 17:00:50.065827 kernel: landlock: Up and running. May 27 17:00:50.065831 kernel: SELinux: Initializing. May 27 17:00:50.065836 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 27 17:00:50.065840 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 27 17:00:50.065849 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1 May 27 17:00:50.065854 kernel: Hyper-V: Host Build 10.0.26100.1254-1-0 May 27 17:00:50.065859 kernel: Hyper-V: enabling crash_kexec_post_notifiers May 27 17:00:50.065864 kernel: rcu: Hierarchical SRCU implementation. May 27 17:00:50.065868 kernel: rcu: Max phase no-delay instances is 400. May 27 17:00:50.065873 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 27 17:00:50.065879 kernel: Remapping and enabling EFI services. May 27 17:00:50.065884 kernel: smp: Bringing up secondary CPUs ... May 27 17:00:50.065888 kernel: Detected PIPT I-cache on CPU1 May 27 17:00:50.065893 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 May 27 17:00:50.065898 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] May 27 17:00:50.065903 kernel: smp: Brought up 1 node, 2 CPUs May 27 17:00:50.065908 kernel: SMP: Total of 2 processors activated. 
May 27 17:00:50.065913 kernel: CPU: All CPU(s) started at EL1 May 27 17:00:50.065917 kernel: CPU features: detected: 32-bit EL0 Support May 27 17:00:50.065922 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence May 27 17:00:50.065927 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 27 17:00:50.065931 kernel: CPU features: detected: Common not Private translations May 27 17:00:50.065936 kernel: CPU features: detected: CRC32 instructions May 27 17:00:50.065941 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) May 27 17:00:50.065946 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 27 17:00:50.065951 kernel: CPU features: detected: LSE atomic instructions May 27 17:00:50.065956 kernel: CPU features: detected: Privileged Access Never May 27 17:00:50.065960 kernel: CPU features: detected: Speculation barrier (SB) May 27 17:00:50.065965 kernel: CPU features: detected: TLB range maintenance instructions May 27 17:00:50.065969 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 27 17:00:50.065974 kernel: CPU features: detected: Scalable Vector Extension May 27 17:00:50.065979 kernel: alternatives: applying system-wide alternatives May 27 17:00:50.065983 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 May 27 17:00:50.065989 kernel: SVE: maximum available vector length 16 bytes per vector May 27 17:00:50.065994 kernel: SVE: default vector length 16 bytes per vector May 27 17:00:50.065999 kernel: Memory: 3976112K/4194160K available (11072K kernel code, 2276K rwdata, 8936K rodata, 39424K init, 1034K bss, 213432K reserved, 0K cma-reserved) May 27 17:00:50.066003 kernel: devtmpfs: initialized May 27 17:00:50.066008 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 27 17:00:50.066013 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 27 17:00:50.066017 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 27 17:00:50.066022 kernel: 0 pages in range for non-PLT usage May 27 17:00:50.066027 kernel: 508544 pages in range for PLT usage May 27 17:00:50.066032 kernel: pinctrl core: initialized pinctrl subsystem May 27 17:00:50.066037 kernel: SMBIOS 3.1.0 present. May 27 17:00:50.066042 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 May 27 17:00:50.066046 kernel: DMI: Memory slots populated: 2/2 May 27 17:00:50.066051 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 27 17:00:50.066056 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 27 17:00:50.066060 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 27 17:00:50.066065 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 27 17:00:50.066070 kernel: audit: initializing netlink subsys (disabled) May 27 17:00:50.066076 kernel: audit: type=2000 audit(0.063:1): state=initialized audit_enabled=0 res=1 May 27 17:00:50.066080 kernel: thermal_sys: Registered thermal governor 'step_wise' May 27 17:00:50.066085 kernel: cpuidle: using governor menu May 27 17:00:50.066090 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 27 17:00:50.066095 kernel: ASID allocator initialised with 32768 entries May 27 17:00:50.066099 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 27 17:00:50.066104 kernel: Serial: AMBA PL011 UART driver May 27 17:00:50.066109 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 27 17:00:50.066113 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 27 17:00:50.066119 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 27 17:00:50.066123 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 27 17:00:50.066128 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 27 17:00:50.066133 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 27 17:00:50.066138 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 27 17:00:50.066142 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 27 17:00:50.066147 kernel: ACPI: Added _OSI(Module Device) May 27 17:00:50.066152 kernel: ACPI: Added _OSI(Processor Device) May 27 17:00:50.066156 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 27 17:00:50.066162 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 27 17:00:50.066166 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 27 17:00:50.066171 kernel: ACPI: Interpreter enabled May 27 17:00:50.066175 kernel: ACPI: Using GIC for interrupt routing May 27 17:00:50.066180 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA May 27 17:00:50.066185 kernel: printk: legacy console [ttyAMA0] enabled May 27 17:00:50.066190 kernel: printk: legacy bootconsole [pl11] disabled May 27 17:00:50.066194 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA May 27 17:00:50.066199 kernel: ACPI: CPU0 has been hot-added May 27 17:00:50.066205 kernel: ACPI: CPU1 has been hot-added May 27 17:00:50.066209 kernel: iommu: Default domain type: Translated May 27 17:00:50.066214 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 27 17:00:50.066219 kernel: efivars: Registered efivars operations May 27 17:00:50.066223 kernel: vgaarb: loaded May 27 17:00:50.066228 kernel: clocksource: Switched to clocksource arch_sys_counter May 27 17:00:50.066233 kernel: VFS: Disk quotas dquot_6.6.0 May 27 17:00:50.066237 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 27 17:00:50.066242 kernel: pnp: PnP ACPI init May 27 17:00:50.066247 kernel: pnp: PnP ACPI: found 0 devices May 27 17:00:50.066252 kernel: NET: Registered PF_INET protocol family May 27 17:00:50.066256 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 27 17:00:50.066261 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 27 17:00:50.066266 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 27 17:00:50.066271 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 27 17:00:50.066275 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 27 17:00:50.066280 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 27 17:00:50.066285 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 27 17:00:50.066291 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 27 17:00:50.066295 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 27 17:00:50.066300 kernel: PCI: CLS 0 bytes, default 64 
May 27 17:00:50.066304 kernel: kvm [1]: HYP mode not available May 27 17:00:50.066309 kernel: Initialise system trusted keyrings May 27 17:00:50.066314 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 27 17:00:50.066318 kernel: Key type asymmetric registered May 27 17:00:50.066323 kernel: Asymmetric key parser 'x509' registered May 27 17:00:50.066328 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 27 17:00:50.066334 kernel: io scheduler mq-deadline registered May 27 17:00:50.066338 kernel: io scheduler kyber registered May 27 17:00:50.066343 kernel: io scheduler bfq registered May 27 17:00:50.066348 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 27 17:00:50.066352 kernel: thunder_xcv, ver 1.0 May 27 17:00:50.066357 kernel: thunder_bgx, ver 1.0 May 27 17:00:50.066362 kernel: nicpf, ver 1.0 May 27 17:00:50.066366 kernel: nicvf, ver 1.0 May 27 17:00:50.066491 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 27 17:00:50.066546 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-27T17:00:49 UTC (1748365249) May 27 17:00:50.066552 kernel: efifb: probing for efifb May 27 17:00:50.066557 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k May 27 17:00:50.066562 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 May 27 17:00:50.066567 kernel: efifb: scrolling: redraw May 27 17:00:50.066571 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 27 17:00:50.066576 kernel: Console: switching to colour frame buffer device 128x48 May 27 17:00:50.066581 kernel: fb0: EFI VGA frame buffer device May 27 17:00:50.066586 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... May 27 17:00:50.066591 kernel: hid: raw HID events driver (C) Jiri Kosina May 27 17:00:50.066596 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available May 27 17:00:50.066601 kernel: watchdog: NMI not fully supported May 27 17:00:50.066605 kernel: watchdog: Hard watchdog permanently disabled May 27 17:00:50.066619 kernel: NET: Registered PF_INET6 protocol family May 27 17:00:50.066624 kernel: Segment Routing with IPv6 May 27 17:00:50.066628 kernel: In-situ OAM (IOAM) with IPv6 May 27 17:00:50.066633 kernel: NET: Registered PF_PACKET protocol family May 27 17:00:50.066639 kernel: Key type dns_resolver registered May 27 17:00:50.066644 kernel: registered taskstats version 1 May 27 17:00:50.066649 kernel: Loading compiled-in X.509 certificates May 27 17:00:50.066653 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 8e5e45c34fa91568ef1fa3bdfd5a71a43b4c4580' May 27 17:00:50.066658 kernel: Demotion targets for Node 0: null May 27 17:00:50.066663 kernel: Key type .fscrypt registered May 27 17:00:50.066667 kernel: Key type fscrypt-provisioning registered May 27 17:00:50.066672 kernel: ima: No TPM chip found, activating TPM-bypass! May 27 17:00:50.066677 kernel: ima: Allocated hash algorithm: sha1 May 27 17:00:50.066682 kernel: ima: No architecture policies found May 27 17:00:50.066687 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 27 17:00:50.066692 kernel: clk: Disabling unused clocks May 27 17:00:50.066696 kernel: PM: genpd: Disabling unused power domains May 27 17:00:50.066701 kernel: Warning: unable to open an initial console. 
May 27 17:00:50.066706 kernel: Freeing unused kernel memory: 39424K May 27 17:00:50.066711 kernel: Run /init as init process May 27 17:00:50.066715 kernel: with arguments: May 27 17:00:50.066720 kernel: /init May 27 17:00:50.066725 kernel: with environment: May 27 17:00:50.066730 kernel: HOME=/ May 27 17:00:50.066735 kernel: TERM=linux May 27 17:00:50.066739 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 27 17:00:50.066746 systemd[1]: Successfully made /usr/ read-only. May 27 17:00:50.066752 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 17:00:50.066758 systemd[1]: Detected virtualization microsoft. May 27 17:00:50.066764 systemd[1]: Detected architecture arm64. May 27 17:00:50.066769 systemd[1]: Running in initrd. May 27 17:00:50.066774 systemd[1]: No hostname configured, using default hostname. May 27 17:00:50.066779 systemd[1]: Hostname set to . May 27 17:00:50.066784 systemd[1]: Initializing machine ID from random generator. May 27 17:00:50.066789 systemd[1]: Queued start job for default target initrd.target. May 27 17:00:50.066794 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 17:00:50.066799 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 17:00:50.066806 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 27 17:00:50.066811 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 17:00:50.066816 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 27 17:00:50.066822 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 27 17:00:50.066828 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 27 17:00:50.066833 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 27 17:00:50.066838 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 17:00:50.066844 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 17:00:50.066849 systemd[1]: Reached target paths.target - Path Units. May 27 17:00:50.066854 systemd[1]: Reached target slices.target - Slice Units. May 27 17:00:50.066859 systemd[1]: Reached target swap.target - Swaps. May 27 17:00:50.066865 systemd[1]: Reached target timers.target - Timer Units. May 27 17:00:50.066870 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 27 17:00:50.066875 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 17:00:50.066880 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 27 17:00:50.066885 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 27 17:00:50.066891 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 17:00:50.066896 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
May 27 17:00:50.066901 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 17:00:50.066906 systemd[1]: Reached target sockets.target - Socket Units. May 27 17:00:50.066912 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 17:00:50.066917 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 17:00:50.066922 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 17:00:50.066927 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 17:00:50.066934 systemd[1]: Starting systemd-fsck-usr.service... May 27 17:00:50.066939 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 17:00:50.066944 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 17:00:50.066963 systemd-journald[224]: Collecting audit messages is disabled. May 27 17:00:50.066979 systemd-journald[224]: Journal started May 27 17:00:50.066994 systemd-journald[224]: Runtime Journal (/run/log/journal/ef4ba6beaf1b4b26b919c37252733752) is 8M, max 78.5M, 70.5M free. May 27 17:00:50.074660 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:00:50.080133 systemd-modules-load[226]: Inserted module 'overlay' May 27 17:00:50.099984 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 27 17:00:50.102589 systemd-modules-load[226]: Inserted module 'br_netfilter' May 27 17:00:50.109943 kernel: Bridge firewalling registered May 27 17:00:50.109967 systemd[1]: Started systemd-journald.service - Journal Service. May 27 17:00:50.114212 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 17:00:50.118486 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 17:00:50.127271 systemd[1]: Finished systemd-fsck-usr.service. May 27 17:00:50.135501 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 17:00:50.143087 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:00:50.153848 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 17:00:50.168174 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 17:00:50.178788 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 17:00:50.190026 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 17:00:50.203700 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 17:00:50.213720 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 17:00:50.225210 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 17:00:50.227750 systemd-tmpfiles[245]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 17:00:50.230585 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 17:00:50.243227 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 27 17:00:50.275771 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 27 17:00:50.281973 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 17:00:50.305565 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 17:00:50.311530 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=4e706b869299e1c88703222069cdfa08c45ebce568f762053eea5b3f5f0939c3 May 27 17:00:50.349996 systemd-resolved[260]: Positive Trust Anchors: May 27 17:00:50.350011 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 17:00:50.350032 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 17:00:50.355205 systemd-resolved[260]: Defaulting to hostname 'linux'. May 27 17:00:50.356250 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 17:00:50.361559 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 17:00:50.433667 kernel: SCSI subsystem initialized May 27 17:00:50.440645 kernel: Loading iSCSI transport class v2.0-870. May 27 17:00:50.447639 kernel: iscsi: registered transport (tcp) May 27 17:00:50.460893 kernel: iscsi: registered transport (qla4xxx) May 27 17:00:50.460959 kernel: QLogic iSCSI HBA Driver May 27 17:00:50.474756 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 17:00:50.495647 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 17:00:50.502386 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 17:00:50.548134 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 17:00:50.554768 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 27 17:00:50.617636 kernel: raid6: neonx8 gen() 18553 MB/s May 27 17:00:50.636635 kernel: raid6: neonx4 gen() 18572 MB/s May 27 17:00:50.655646 kernel: raid6: neonx2 gen() 17083 MB/s May 27 17:00:50.675650 kernel: raid6: neonx1 gen() 15026 MB/s May 27 17:00:50.694636 kernel: raid6: int64x8 gen() 10517 MB/s May 27 17:00:50.713644 kernel: raid6: int64x4 gen() 10612 MB/s May 27 17:00:50.733642 kernel: raid6: int64x2 gen() 8979 MB/s May 27 17:00:50.754959 kernel: raid6: int64x1 gen() 7001 MB/s May 27 17:00:50.755028 kernel: raid6: using algorithm neonx4 gen() 18572 MB/s May 27 17:00:50.776714 kernel: raid6: .... 
xor() 15149 MB/s, rmw enabled May 27 17:00:50.776766 kernel: raid6: using neon recovery algorithm May 27 17:00:50.784660 kernel: xor: measuring software checksum speed May 27 17:00:50.784725 kernel: 8regs : 28485 MB/sec May 27 17:00:50.788016 kernel: 32regs : 28633 MB/sec May 27 17:00:50.790356 kernel: arm64_neon : 37514 MB/sec May 27 17:00:50.793154 kernel: xor: using function: arm64_neon (37514 MB/sec) May 27 17:00:50.833644 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 17:00:50.838954 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 17:00:50.848790 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 17:00:50.869425 systemd-udevd[471]: Using default interface naming scheme 'v255'. May 27 17:00:50.873542 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 17:00:50.883793 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 17:00:50.912383 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation May 27 17:00:50.933600 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 27 17:00:50.938987 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 17:00:50.989417 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 17:00:50.997839 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 27 17:00:51.060627 kernel: hv_vmbus: Vmbus version:5.3 May 27 17:00:51.082133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 17:00:51.097243 kernel: hv_vmbus: registering driver hyperv_keyboard May 27 17:00:51.097263 kernel: hv_vmbus: registering driver hid_hyperv May 27 17:00:51.097270 kernel: pps_core: LinuxPPS API ver. 1 registered May 27 17:00:51.097277 kernel: hv_vmbus: registering driver hv_storvsc May 27 17:00:51.097285 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 27 17:00:51.097291 kernel: scsi host0: storvsc_host_t May 27 17:00:51.097474 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 May 27 17:00:51.082241 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:00:51.150478 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 May 27 17:00:51.150499 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 27 17:00:51.150695 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 May 27 17:00:51.150822 kernel: hv_vmbus: registering driver hv_netvsc May 27 17:00:51.150830 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 May 27 17:00:51.150910 kernel: scsi host1: storvsc_host_t May 27 17:00:51.150971 kernel: PTP clock support registered May 27 17:00:51.150977 kernel: hv_utils: Registering HyperV Utility Driver May 27 17:00:51.118785 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:00:51.172070 kernel: hv_vmbus: registering driver hv_utils May 27 17:00:51.172096 kernel: hv_utils: Heartbeat IC version 3.0 May 27 17:00:51.172103 kernel: hv_utils: Shutdown IC version 3.2 May 27 17:00:51.172109 kernel: hv_utils: TimeSync IC version 4.0 May 27 17:00:51.136960 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 27 17:00:51.160013 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 17:00:51.030494 systemd-journald[224]: Time jumped backwards, rotating. May 27 17:00:51.172447 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 17:00:51.044316 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 27 17:00:51.044477 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 27 17:00:51.044543 kernel: sd 0:0:0:0: [sda] Write Protect is off May 27 17:00:51.044606 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 27 17:00:51.172532 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:00:51.076513 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 27 17:00:51.076686 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#197 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 May 27 17:00:51.076774 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#204 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 May 27 17:00:51.076833 kernel: hv_netvsc 002248b8-9ab3-0022-48b8-9ab3002248b8 eth0: VF slot 1 added May 27 17:00:51.017050 systemd-resolved[260]: Clock change detected. Flushing caches. May 27 17:00:51.038133 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:00:51.092218 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 27 17:00:51.092268 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 27 17:00:51.094790 kernel: sr 0:0:0:2: [sr0] scsi-1 drive May 27 17:00:51.096305 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:00:51.111280 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 27 17:00:51.111296 kernel: hv_vmbus: registering driver hv_pci May 27 17:00:51.111309 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 May 27 17:00:51.111454 kernel: hv_pci 462242a8-6bee-43e6-aa46-a1d9fc70d566: PCI VMBus probing: Using version 0x10004 May 27 17:00:51.126647 kernel: hv_pci 462242a8-6bee-43e6-aa46-a1d9fc70d566: PCI host bridge to bus 6bee:00 May 27 17:00:51.126855 kernel: pci_bus 6bee:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] May 27 17:00:51.126945 kernel: pci_bus 6bee:00: No busn resource found for root bus, will use [bus 00-ff] May 27 17:00:51.137381 kernel: pci 6bee:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint May 27 17:00:51.144453 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#239 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 May 27 17:00:51.144667 kernel: pci 6bee:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] May 27 17:00:51.150389 kernel: pci 6bee:00:02.0: enabling Extended Tags May 27 17:00:51.168933 kernel: pci 6bee:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6bee:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) May 27 17:00:51.178500 kernel: pci_bus 6bee:00: busn_res: [bus 00-ff] end is updated to 00 May 27 17:00:51.178716 kernel: pci 6bee:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned May 27 17:00:51.194387 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#80 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 May 27 17:00:51.249170 kernel: mlx5_core 6bee:00:02.0: enabling device (0000 -> 0002) May 27 17:00:51.257078 kernel: mlx5_core 6bee:00:02.0: PTM is not supported by PCIe May 27 17:00:51.257194 kernel: mlx5_core 6bee:00:02.0: firmware version: 16.30.5006 May 27 17:00:51.424416 kernel: hv_netvsc 
002248b8-9ab3-0022-48b8-9ab3002248b8 eth0: VF registering: eth1 May 27 17:00:51.424630 kernel: mlx5_core 6bee:00:02.0 eth1: joined to eth0 May 27 17:00:51.431072 kernel: mlx5_core 6bee:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) May 27 17:00:51.446371 kernel: mlx5_core 6bee:00:02.0 enP27630s1: renamed from eth1 May 27 17:00:51.737596 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. May 27 17:00:51.762935 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 27 17:00:51.778809 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 17:00:51.791267 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. May 27 17:00:51.796656 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. May 27 17:00:51.813636 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. May 27 17:00:51.822877 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 17:00:51.828893 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 17:00:51.839110 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 17:00:51.853531 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 17:00:51.862510 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 17:00:51.891364 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#121 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 May 27 17:00:51.892856 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 17:00:51.908380 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 27 17:00:51.916357 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#74 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 May 27 17:00:51.927467 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 27 17:00:52.934702 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#88 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 May 27 17:00:52.945520 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 27 17:00:52.945977 disk-uuid[658]: The operation has completed successfully. May 27 17:00:53.016947 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 17:00:53.019121 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 17:00:53.040320 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 17:00:53.061790 sh[819]: Success May 27 17:00:53.099573 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 17:00:53.099661 kernel: device-mapper: uevent: version 1.0.3 May 27 17:00:53.104708 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 17:00:53.114378 kernel: device-mapper: verity: sha256 using shash "sha256-ce" May 27 17:00:53.304949 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 17:00:53.313523 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 17:00:53.329907 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 27 17:00:53.354432 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 17:00:53.354499 kernel: BTRFS: device fsid 3c8c76ef-f1da-40fe-979d-11bdf765e403 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (837) May 27 17:00:53.360051 kernel: BTRFS info (device dm-0): first mount of filesystem 3c8c76ef-f1da-40fe-979d-11bdf765e403 May 27 17:00:53.365283 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 27 17:00:53.368426 kernel: BTRFS info (device dm-0): using free-space-tree May 27 17:00:53.651959 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 17:00:53.656007 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 17:00:53.663393 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 17:00:53.664168 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 17:00:53.685227 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 27 17:00:53.711486 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (860) May 27 17:00:53.723335 kernel: BTRFS info (device sda6): first mount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754 May 27 17:00:53.724308 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 27 17:00:53.724316 kernel: BTRFS info (device sda6): using free-space-tree May 27 17:00:53.774371 kernel: BTRFS info (device sda6): last unmount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754 May 27 17:00:53.776635 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 17:00:53.784157 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 27 17:00:53.816003 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 17:00:53.827002 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 17:00:53.859062 systemd-networkd[1006]: lo: Link UP May 27 17:00:53.861554 systemd-networkd[1006]: lo: Gained carrier May 27 17:00:53.862919 systemd-networkd[1006]: Enumeration completed May 27 17:00:53.863097 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 17:00:53.864636 systemd-networkd[1006]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:00:53.864639 systemd-networkd[1006]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 17:00:53.872147 systemd[1]: Reached target network.target - Network. May 27 17:00:53.936360 kernel: mlx5_core 6bee:00:02.0 enP27630s1: Link up May 27 17:00:53.966015 systemd-networkd[1006]: enP27630s1: Link UP May 27 17:00:53.969274 kernel: hv_netvsc 002248b8-9ab3-0022-48b8-9ab3002248b8 eth0: Data path switched to VF: enP27630s1 May 27 17:00:53.966079 systemd-networkd[1006]: eth0: Link UP May 27 17:00:53.966170 systemd-networkd[1006]: eth0: Gained carrier May 27 17:00:53.966180 systemd-networkd[1006]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 27 17:00:53.975625 systemd-networkd[1006]: enP27630s1: Gained carrier May 27 17:00:53.995392 systemd-networkd[1006]: eth0: DHCPv4 address 10.200.20.45/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 27 17:00:55.048442 ignition[979]: Ignition 2.21.0 May 27 17:00:55.049391 ignition[979]: Stage: fetch-offline May 27 17:00:55.052710 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 27 17:00:55.049551 ignition[979]: no configs at "/usr/lib/ignition/base.d" May 27 17:00:55.061269 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 27 17:00:55.049568 ignition[979]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 27 17:00:55.049698 ignition[979]: parsed url from cmdline: "" May 27 17:00:55.049701 ignition[979]: no config URL provided May 27 17:00:55.049704 ignition[979]: reading system config file "/usr/lib/ignition/user.ign" May 27 17:00:55.049710 ignition[979]: no config at "/usr/lib/ignition/user.ign" May 27 17:00:55.049714 ignition[979]: failed to fetch config: resource requires networking May 27 17:00:55.049875 ignition[979]: Ignition finished successfully May 27 17:00:55.094087 ignition[1016]: Ignition 2.21.0 May 27 17:00:55.094093 ignition[1016]: Stage: fetch May 27 17:00:55.094378 ignition[1016]: no configs at "/usr/lib/ignition/base.d" May 27 17:00:55.094388 ignition[1016]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 27 17:00:55.094479 ignition[1016]: parsed url from cmdline: "" May 27 17:00:55.094481 ignition[1016]: no config URL provided May 27 17:00:55.094486 ignition[1016]: reading system config file "/usr/lib/ignition/user.ign" May 27 17:00:55.094496 ignition[1016]: no config at "/usr/lib/ignition/user.ign" May 27 17:00:55.094535 ignition[1016]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 27 17:00:55.131461 systemd-networkd[1006]: enP27630s1: Gained IPv6LL May 27 17:00:55.168934 ignition[1016]: GET result: OK May 27 17:00:55.169052 ignition[1016]: config has been read from IMDS userdata May 27 17:00:55.169071 ignition[1016]: parsing config with SHA512: 929f3dfec7813de48572dfbf032d14ff8b2d45d24918e966a8040e1a5a286fc5befc7fe5b3917b5e95a90b34e22b325ada94624140f75d29a3a0b164838e5004 May 27 17:00:55.175715 unknown[1016]: fetched base config from "system" May 27 17:00:55.175727 unknown[1016]: fetched base config from "system" May 27 17:00:55.176017 ignition[1016]: fetch: fetch complete May 27 17:00:55.175732 unknown[1016]: fetched user config from "azure" May 27 17:00:55.176022 ignition[1016]: fetch: fetch passed May 27 17:00:55.178216 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 27 17:00:55.176073 ignition[1016]: Ignition finished successfully May 27 17:00:55.185503 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 27 17:00:55.221246 ignition[1023]: Ignition 2.21.0 May 27 17:00:55.221263 ignition[1023]: Stage: kargs May 27 17:00:55.221479 ignition[1023]: no configs at "/usr/lib/ignition/base.d" May 27 17:00:55.229383 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 17:00:55.221486 ignition[1023]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 27 17:00:55.236914 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 27 17:00:55.222201 ignition[1023]: kargs: kargs passed May 27 17:00:55.222250 ignition[1023]: Ignition finished successfully May 27 17:00:55.261960 ignition[1030]: Ignition 2.21.0 May 27 17:00:55.261977 ignition[1030]: Stage: disks May 27 17:00:55.265671 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 17:00:55.262161 ignition[1030]: no configs at "/usr/lib/ignition/base.d" May 27 17:00:55.271433 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 17:00:55.262168 ignition[1030]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 27 17:00:55.275911 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 17:00:55.262857 ignition[1030]: disks: disks passed May 27 17:00:55.285528 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 17:00:55.262918 ignition[1030]: Ignition finished successfully May 27 17:00:55.292525 systemd[1]: Reached target sysinit.target - System Initialization. May 27 17:00:55.301649 systemd[1]: Reached target basic.target - Basic System. May 27 17:00:55.310156 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 27 17:00:55.395479 systemd-fsck[1039]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks May 27 17:00:55.402573 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 17:00:55.408720 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 17:00:55.515728 systemd-networkd[1006]: eth0: Gained IPv6LL May 27 17:00:55.628366 kernel: EXT4-fs (sda9): mounted filesystem a5483afc-8426-4c3e-85ef-8146f9077e7d r/w with ordered data mode. Quota mode: none. May 27 17:00:55.629762 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 17:00:55.634688 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 17:00:55.657880 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 17:00:55.665678 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 17:00:55.675110 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 27 17:00:55.684967 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 17:00:55.686746 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 17:00:55.699204 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 17:00:55.707714 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 17:00:55.724379 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (1053) May 27 17:00:55.733889 kernel: BTRFS info (device sda6): first mount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754 May 27 17:00:55.733940 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 27 17:00:55.737104 kernel: BTRFS info (device sda6): using free-space-tree May 27 17:00:55.753488 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 27 17:00:56.196736 coreos-metadata[1055]: May 27 17:00:56.196 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 27 17:00:56.204541 coreos-metadata[1055]: May 27 17:00:56.204 INFO Fetch successful May 27 17:00:56.208425 coreos-metadata[1055]: May 27 17:00:56.208 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 27 17:00:56.224736 coreos-metadata[1055]: May 27 17:00:56.224 INFO Fetch successful May 27 17:00:56.240921 coreos-metadata[1055]: May 27 17:00:56.240 INFO wrote hostname ci-4344.0.0-a-efe79b1159 to /sysroot/etc/hostname May 27 17:00:56.248804 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 27 17:00:56.578268 initrd-setup-root[1083]: cut: /sysroot/etc/passwd: No such file or directory May 27 17:00:56.630845 initrd-setup-root[1090]: cut: /sysroot/etc/group: No such file or directory May 27 17:00:56.638949 initrd-setup-root[1097]: cut: /sysroot/etc/shadow: No such file or directory May 27 17:00:56.644304 initrd-setup-root[1104]: cut: /sysroot/etc/gshadow: No such file or directory May 27 17:00:57.615164 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 17:00:57.621332 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 17:00:57.641158 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 27 17:00:57.652102 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 17:00:57.662294 kernel: BTRFS info (device sda6): last unmount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754 May 27 17:00:57.679158 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 27 17:00:57.691367 ignition[1173]: INFO : Ignition 2.21.0 May 27 17:00:57.691367 ignition[1173]: INFO : Stage: mount May 27 17:00:57.691367 ignition[1173]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:00:57.691367 ignition[1173]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 27 17:00:57.718453 ignition[1173]: INFO : mount: mount passed May 27 17:00:57.718453 ignition[1173]: INFO : Ignition finished successfully May 27 17:00:57.697950 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 17:00:57.705591 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 17:00:57.729591 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 17:00:57.755630 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (1184) May 27 17:00:57.755686 kernel: BTRFS info (device sda6): first mount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754 May 27 17:00:57.760372 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 27 17:00:57.763329 kernel: BTRFS info (device sda6): using free-space-tree May 27 17:00:57.779011 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 27 17:00:57.805372 ignition[1201]: INFO : Ignition 2.21.0 May 27 17:00:57.805372 ignition[1201]: INFO : Stage: files May 27 17:00:57.805372 ignition[1201]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:00:57.805372 ignition[1201]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 27 17:00:57.821051 ignition[1201]: DEBUG : files: compiled without relabeling support, skipping May 27 17:00:57.825803 ignition[1201]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 17:00:57.825803 ignition[1201]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 17:00:57.860200 ignition[1201]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 17:00:57.865533 ignition[1201]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 17:00:57.865533 ignition[1201]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 17:00:57.860639 unknown[1201]: wrote ssh authorized keys file for user: core May 27 17:00:57.917668 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" May 27 17:00:57.925360 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 May 27 17:00:58.002531 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 27 17:00:58.921734 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" May 27 17:00:58.929480 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 17:00:58.929480 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 27 17:00:59.359377 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 27 17:00:59.422187 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 17:00:59.429555 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 27 17:00:59.429555 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 27 17:00:59.429555 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 17:00:59.429555 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 17:00:59.429555 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 17:00:59.429555 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 17:00:59.429555 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 17:00:59.429555 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 
17:00:59.485113 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 17:00:59.485113 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 17:00:59.485113 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 27 17:00:59.485113 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 27 17:00:59.485113 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 27 17:00:59.527327 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 May 27 17:01:00.187815 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 27 17:01:00.377394 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 27 17:01:00.377394 ignition[1201]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 27 17:01:00.409454 ignition[1201]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 17:01:00.423533 ignition[1201]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 17:01:00.423533 ignition[1201]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 27 17:01:00.423533 ignition[1201]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 27 17:01:00.451502 ignition[1201]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 27 17:01:00.451502 ignition[1201]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 17:01:00.451502 ignition[1201]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 17:01:00.451502 ignition[1201]: INFO : files: files passed May 27 17:01:00.451502 ignition[1201]: INFO : Ignition finished successfully May 27 17:01:00.432432 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 17:01:00.442530 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 17:01:00.466481 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 17:01:00.476635 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 17:01:00.487921 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 27 17:01:00.510407 initrd-setup-root-after-ignition[1235]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 17:01:00.506843 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
May 27 17:01:00.535909 initrd-setup-root-after-ignition[1231]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 17:01:00.535909 initrd-setup-root-after-ignition[1231]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 17:01:00.515673 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 17:01:00.526146 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 27 17:01:00.593316 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 17:01:00.593454 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 17:01:00.602163 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 27 17:01:00.610777 systemd[1]: Reached target initrd.target - Initrd Default Target. May 27 17:01:00.618406 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 17:01:00.619258 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 17:01:00.656422 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 17:01:00.664516 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 17:01:00.688879 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 27 17:01:00.693519 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 17:01:00.702865 systemd[1]: Stopped target timers.target - Timer Units. May 27 17:01:00.711088 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 27 17:01:00.711201 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 17:01:00.723508 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 27 17:01:00.732232 systemd[1]: Stopped target basic.target - Basic System. May 27 17:01:00.739733 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 27 17:01:00.747073 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 27 17:01:00.755477 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 27 17:01:00.763813 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 27 17:01:00.772091 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 27 17:01:00.780189 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 27 17:01:00.788474 systemd[1]: Stopped target sysinit.target - System Initialization. May 27 17:01:00.797337 systemd[1]: Stopped target local-fs.target - Local File Systems. May 27 17:01:00.804738 systemd[1]: Stopped target swap.target - Swaps. May 27 17:01:00.811606 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 27 17:01:00.811717 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 27 17:01:00.822087 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 27 17:01:00.826729 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 17:01:00.835116 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 27 17:01:00.838362 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 17:01:00.843793 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
May 27 17:01:00.843895 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 27 17:01:00.855839 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 27 17:01:00.855939 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 17:01:00.861095 systemd[1]: ignition-files.service: Deactivated successfully. May 27 17:01:00.861170 systemd[1]: Stopped ignition-files.service - Ignition (files). May 27 17:01:00.870073 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 27 17:01:00.870149 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 27 17:01:00.882557 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 27 17:01:00.945363 ignition[1255]: INFO : Ignition 2.21.0 May 27 17:01:00.945363 ignition[1255]: INFO : Stage: umount May 27 17:01:00.945363 ignition[1255]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:01:00.945363 ignition[1255]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 27 17:01:00.945363 ignition[1255]: INFO : umount: umount passed May 27 17:01:00.945363 ignition[1255]: INFO : Ignition finished successfully May 27 17:01:00.890486 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 27 17:01:00.903369 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 27 17:01:00.903539 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 27 17:01:00.911424 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 27 17:01:00.911561 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 17:01:00.934000 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 27 17:01:00.934154 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 27 17:01:00.941708 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 17:01:00.941793 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 27 17:01:00.949573 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 17:01:00.949622 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 17:01:00.955921 systemd[1]: ignition-kargs.service: Deactivated successfully. May 27 17:01:00.955960 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 17:01:00.962284 systemd[1]: ignition-fetch.service: Deactivated successfully. May 27 17:01:00.962328 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 27 17:01:00.984094 systemd[1]: Stopped target network.target - Network. May 27 17:01:00.997648 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 27 17:01:00.997735 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 27 17:01:01.007953 systemd[1]: Stopped target paths.target - Path Units. May 27 17:01:01.015304 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 17:01:01.021397 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 17:01:01.028001 systemd[1]: Stopped target slices.target - Slice Units. May 27 17:01:01.036902 systemd[1]: Stopped target sockets.target - Socket Units. May 27 17:01:01.044381 systemd[1]: iscsid.socket: Deactivated successfully. May 27 17:01:01.044440 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 27 17:01:01.052347 systemd[1]: iscsiuio.socket: Deactivated successfully. 
May 27 17:01:01.052395 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 17:01:01.060378 systemd[1]: ignition-setup.service: Deactivated successfully. May 27 17:01:01.060439 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 27 17:01:01.068237 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 27 17:01:01.068267 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 27 17:01:01.076407 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 27 17:01:01.087660 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 27 17:01:01.102288 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 27 17:01:01.102840 systemd[1]: systemd-resolved.service: Deactivated successfully. May 27 17:01:01.102943 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 27 17:01:01.115335 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 27 17:01:01.115572 systemd[1]: systemd-networkd.service: Deactivated successfully. May 27 17:01:01.115679 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 27 17:01:01.126233 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 27 17:01:01.127812 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 27 17:01:01.136996 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 27 17:01:01.287116 kernel: hv_netvsc 002248b8-9ab3-0022-48b8-9ab3002248b8 eth0: Data path switched from VF: enP27630s1 May 27 17:01:01.137044 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 27 17:01:01.146505 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 27 17:01:01.159799 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 27 17:01:01.159878 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 17:01:01.169023 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 17:01:01.169067 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 17:01:01.181038 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 17:01:01.181103 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 27 17:01:01.190195 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 27 17:01:01.190257 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 17:01:01.202930 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 17:01:01.212331 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 17:01:01.212429 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 27 17:01:01.236163 systemd[1]: systemd-udevd.service: Deactivated successfully. May 27 17:01:01.236334 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 17:01:01.245467 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 27 17:01:01.245512 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 27 17:01:01.253484 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 27 17:01:01.253514 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
May 27 17:01:01.261057 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 27 17:01:01.261109 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 27 17:01:01.273847 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 27 17:01:01.273899 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 27 17:01:01.287196 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 27 17:01:01.287258 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 17:01:01.297850 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 27 17:01:01.307000 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 27 17:01:01.307074 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 27 17:01:01.320993 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 27 17:01:01.321048 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 17:01:01.331178 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 27 17:01:01.331244 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 17:01:01.343994 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 27 17:01:01.344057 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 27 17:01:01.348864 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 17:01:01.348904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:01:01.362100 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 27 17:01:01.362155 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. May 27 17:01:01.362179 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 27 17:01:01.362209 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 17:01:01.362512 systemd[1]: network-cleanup.service: Deactivated successfully. May 27 17:01:01.364365 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 27 17:01:01.375656 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 27 17:01:01.375749 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 27 17:01:01.476155 systemd[1]: sysroot-boot.service: Deactivated successfully. May 27 17:01:01.476301 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 27 17:01:01.486964 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 27 17:01:01.494507 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 27 17:01:01.494604 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 27 17:01:01.515533 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 27 17:01:01.544107 systemd[1]: Switching root. May 27 17:01:01.604063 systemd-journald[224]: Journal stopped May 27 17:01:08.024677 systemd-journald[224]: Received SIGTERM from PID 1 (systemd). 
May 27 17:01:08.024698 kernel: SELinux: policy capability network_peer_controls=1 May 27 17:01:08.024707 kernel: SELinux: policy capability open_perms=1 May 27 17:01:08.024713 kernel: SELinux: policy capability extended_socket_class=1 May 27 17:01:08.024718 kernel: SELinux: policy capability always_check_network=0 May 27 17:01:08.024723 kernel: SELinux: policy capability cgroup_seclabel=1 May 27 17:01:08.024729 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 27 17:01:08.024735 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 27 17:01:08.024740 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 27 17:01:08.024745 kernel: SELinux: policy capability userspace_initial_context=0 May 27 17:01:08.024751 kernel: audit: type=1403 audit(1748365262.835:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 27 17:01:08.024757 systemd[1]: Successfully loaded SELinux policy in 154.706ms. May 27 17:01:08.024764 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.312ms. May 27 17:01:08.024771 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 17:01:08.024777 systemd[1]: Detected virtualization microsoft. May 27 17:01:08.024784 systemd[1]: Detected architecture arm64. May 27 17:01:08.024790 systemd[1]: Detected first boot. May 27 17:01:08.024796 systemd[1]: Hostname set to . May 27 17:01:08.024802 systemd[1]: Initializing machine ID from random generator. May 27 17:01:08.024808 zram_generator::config[1298]: No configuration found. May 27 17:01:08.024815 kernel: NET: Registered PF_VSOCK protocol family May 27 17:01:08.024820 systemd[1]: Populated /etc with preset unit settings. May 27 17:01:08.024828 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 27 17:01:08.024834 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 27 17:01:08.024840 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 27 17:01:08.024846 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 27 17:01:08.024852 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 27 17:01:08.024860 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 27 17:01:08.024866 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 27 17:01:08.024872 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 27 17:01:08.024879 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 27 17:01:08.024885 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 27 17:01:08.024891 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 27 17:01:08.024897 systemd[1]: Created slice user.slice - User and Session Slice. May 27 17:01:08.024903 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 17:01:08.024909 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 17:01:08.024915 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
May 27 17:01:08.024922 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 27 17:01:08.024928 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 27 17:01:08.024934 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 17:01:08.024942 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 27 17:01:08.024948 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 17:01:08.024954 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 17:01:08.024960 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 27 17:01:08.024966 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 27 17:01:08.024973 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 27 17:01:08.024979 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 27 17:01:08.024985 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 17:01:08.024992 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 17:01:08.024998 systemd[1]: Reached target slices.target - Slice Units. May 27 17:01:08.025004 systemd[1]: Reached target swap.target - Swaps. May 27 17:01:08.025010 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 27 17:01:08.025017 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 27 17:01:08.025024 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 27 17:01:08.025030 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 17:01:08.025036 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 17:01:08.025043 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 17:01:08.025049 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 27 17:01:08.025056 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 27 17:01:08.025062 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 27 17:01:08.025068 systemd[1]: Mounting media.mount - External Media Directory... May 27 17:01:08.025075 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 27 17:01:08.025081 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 27 17:01:08.025087 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 27 17:01:08.025093 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 27 17:01:08.025100 systemd[1]: Reached target machines.target - Containers. May 27 17:01:08.025107 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 27 17:01:08.025113 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 17:01:08.025119 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 17:01:08.025126 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 27 17:01:08.025132 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
May 27 17:01:08.025138 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 17:01:08.025145 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 17:01:08.025151 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 27 17:01:08.025158 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 17:01:08.025164 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 27 17:01:08.025170 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 27 17:01:08.025176 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 27 17:01:08.025182 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 27 17:01:08.025189 systemd[1]: Stopped systemd-fsck-usr.service. May 27 17:01:08.025195 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 17:01:08.025201 kernel: fuse: init (API version 7.41) May 27 17:01:08.025208 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 17:01:08.025214 kernel: loop: module loaded May 27 17:01:08.025220 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 17:01:08.025226 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 17:01:08.025232 kernel: ACPI: bus type drm_connector registered May 27 17:01:08.025238 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 27 17:01:08.025256 systemd-journald[1402]: Collecting audit messages is disabled. May 27 17:01:08.025273 systemd-journald[1402]: Journal started May 27 17:01:08.025288 systemd-journald[1402]: Runtime Journal (/run/log/journal/2f2de69b73474701b28a1251242f76f9) is 8M, max 78.5M, 70.5M free. May 27 17:01:07.228952 systemd[1]: Queued start job for default target multi-user.target. May 27 17:01:07.236917 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 27 17:01:07.237384 systemd[1]: systemd-journald.service: Deactivated successfully. May 27 17:01:07.237697 systemd[1]: systemd-journald.service: Consumed 2.382s CPU time. May 27 17:01:08.042581 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 27 17:01:08.053376 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 17:01:08.053450 systemd[1]: verity-setup.service: Deactivated successfully. May 27 17:01:08.059453 systemd[1]: Stopped verity-setup.service. May 27 17:01:08.072688 systemd[1]: Started systemd-journald.service - Journal Service. May 27 17:01:08.073410 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 27 17:01:08.077471 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 27 17:01:08.082066 systemd[1]: Mounted media.mount - External Media Directory. May 27 17:01:08.085774 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 27 17:01:08.090058 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 27 17:01:08.094417 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 27 17:01:08.098333 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
May 27 17:01:08.103260 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 17:01:08.108336 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 27 17:01:08.110384 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 27 17:01:08.115513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 17:01:08.115650 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 17:01:08.120140 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 17:01:08.120265 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 17:01:08.124441 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 17:01:08.124575 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 17:01:08.130171 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 27 17:01:08.130305 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 27 17:01:08.135030 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 17:01:08.135174 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 17:01:08.139935 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 17:01:08.145091 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 17:01:08.150297 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 27 17:01:08.156060 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 27 17:01:08.161104 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 17:01:08.174799 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 17:01:08.180403 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 17:01:08.192490 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 27 17:01:08.197610 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 27 17:01:08.197649 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 17:01:08.202244 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 27 17:01:08.208144 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 27 17:01:08.211838 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 17:01:08.218219 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 27 17:01:08.223594 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 27 17:01:08.228102 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 17:01:08.229078 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 27 17:01:08.233221 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 17:01:08.235518 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 17:01:08.242528 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
May 27 17:01:08.249212 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 17:01:08.258333 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 27 17:01:08.263404 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 27 17:01:08.272967 systemd-journald[1402]: Time spent on flushing to /var/log/journal/2f2de69b73474701b28a1251242f76f9 is 49.842ms for 946 entries. May 27 17:01:08.272967 systemd-journald[1402]: System Journal (/var/log/journal/2f2de69b73474701b28a1251242f76f9) is 11.8M, max 2.6G, 2.6G free. May 27 17:01:08.384371 systemd-journald[1402]: Received client request to flush runtime journal. May 27 17:01:08.384420 systemd-journald[1402]: /var/log/journal/2f2de69b73474701b28a1251242f76f9/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. May 27 17:01:08.384437 systemd-journald[1402]: Rotating system journal. May 27 17:01:08.384456 kernel: loop0: detected capacity change from 0 to 138376 May 27 17:01:08.271847 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 27 17:01:08.282284 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 27 17:01:08.288616 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 27 17:01:08.387386 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 27 17:01:08.396169 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 17:01:08.403324 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 27 17:01:08.404152 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 27 17:01:08.418646 systemd-tmpfiles[1439]: ACLs are not supported, ignoring. May 27 17:01:08.418977 systemd-tmpfiles[1439]: ACLs are not supported, ignoring. May 27 17:01:08.422622 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 17:01:08.430490 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 27 17:01:08.843943 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 27 17:01:08.853540 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 17:01:08.869363 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 27 17:01:08.877330 systemd-tmpfiles[1459]: ACLs are not supported, ignoring. May 27 17:01:08.877737 systemd-tmpfiles[1459]: ACLs are not supported, ignoring. May 27 17:01:08.882388 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 17:01:08.920367 kernel: loop1: detected capacity change from 0 to 211168 May 27 17:01:08.958884 kernel: loop2: detected capacity change from 0 to 107312 May 27 17:01:09.325377 kernel: loop3: detected capacity change from 0 to 28936 May 27 17:01:09.521691 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 17:01:09.528249 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 17:01:09.556563 systemd-udevd[1465]: Using default interface naming scheme 'v255'. 
May 27 17:01:09.658371 kernel: loop4: detected capacity change from 0 to 138376 May 27 17:01:09.667375 kernel: loop5: detected capacity change from 0 to 211168 May 27 17:01:09.675380 kernel: loop6: detected capacity change from 0 to 107312 May 27 17:01:09.682377 kernel: loop7: detected capacity change from 0 to 28936 May 27 17:01:09.684183 (sd-merge)[1467]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. May 27 17:01:09.684659 (sd-merge)[1467]: Merged extensions into '/usr'. May 27 17:01:09.687683 systemd[1]: Reload requested from client PID 1437 ('systemd-sysext') (unit systemd-sysext.service)... May 27 17:01:09.687827 systemd[1]: Reloading... May 27 17:01:09.740370 zram_generator::config[1492]: No configuration found. May 27 17:01:09.818116 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:01:09.898147 systemd[1]: Reloading finished in 209 ms. May 27 17:01:09.913703 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 27 17:01:09.925511 systemd[1]: Starting ensure-sysext.service... May 27 17:01:09.933614 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 17:01:09.949507 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 27 17:01:09.949535 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 27 17:01:09.949718 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 27 17:01:09.949853 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 27 17:01:09.950259 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 27 17:01:09.950436 systemd-tmpfiles[1549]: ACLs are not supported, ignoring. May 27 17:01:09.950469 systemd-tmpfiles[1549]: ACLs are not supported, ignoring. May 27 17:01:09.972990 systemd[1]: Reload requested from client PID 1548 ('systemctl') (unit ensure-sysext.service)... May 27 17:01:09.973142 systemd[1]: Reloading... May 27 17:01:09.983222 systemd-tmpfiles[1549]: Detected autofs mount point /boot during canonicalization of boot. May 27 17:01:09.983235 systemd-tmpfiles[1549]: Skipping /boot May 27 17:01:09.992994 systemd-tmpfiles[1549]: Detected autofs mount point /boot during canonicalization of boot. May 27 17:01:09.993009 systemd-tmpfiles[1549]: Skipping /boot May 27 17:01:10.025576 zram_generator::config[1575]: No configuration found. May 27 17:01:10.108864 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:01:10.176670 systemd[1]: Reloading finished in 203 ms. May 27 17:01:10.183631 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 17:01:10.198939 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 17:01:10.223906 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 17:01:10.246625 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
May 27 17:01:10.256952 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 17:01:10.267844 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 17:01:10.288036 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 17:01:10.296759 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 17:01:10.310434 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 17:01:10.314474 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 17:01:10.321854 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 17:01:10.332592 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 17:01:10.338531 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 17:01:10.338662 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 17:01:10.342092 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... May 27 17:01:10.348286 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 17:01:10.356699 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 17:01:10.362749 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 17:01:10.362882 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 17:01:10.362998 systemd[1]: Reached target time-set.target - System Time Set. May 27 17:01:10.371851 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 27 17:01:10.380588 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 17:01:10.380764 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 17:01:10.389035 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 17:01:10.390429 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 17:01:10.399062 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 17:01:10.399396 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 17:01:10.408873 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 17:01:10.411389 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 17:01:10.425449 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 17:01:10.436978 systemd[1]: Finished ensure-sysext.service. May 27 17:01:10.445897 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 27 17:01:10.450689 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 17:01:10.450802 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
May 27 17:01:10.454329 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 27 17:01:10.466395 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#315 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 May 27 17:01:10.476489 augenrules[1714]: No rules May 27 17:01:10.477986 systemd[1]: audit-rules.service: Deactivated successfully. May 27 17:01:10.478214 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 17:01:10.483479 kernel: mousedev: PS/2 mouse device common for all mice May 27 17:01:10.485552 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 27 17:01:10.545625 kernel: hv_vmbus: registering driver hv_balloon May 27 17:01:10.545738 kernel: hv_vmbus: registering driver hyperv_fb May 27 17:01:10.545752 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 May 27 17:01:10.549056 kernel: hv_balloon: Memory hot add disabled on ARM64 May 27 17:01:10.554983 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. May 27 17:01:10.569244 kernel: hyperv_fb: Synthvid Version major 3, minor 5 May 27 17:01:10.569333 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 May 27 17:01:10.578164 kernel: Console: switching to colour dummy device 80x25 May 27 17:01:10.581368 kernel: Console: switching to colour frame buffer device 128x48 May 27 17:01:10.594768 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:01:10.610572 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 17:01:10.610776 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:01:10.622379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:01:10.686861 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 17:01:10.687094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:01:10.702381 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:01:10.737070 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 27 17:01:10.747534 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 27 17:01:10.829713 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 17:01:10.835611 systemd-resolved[1669]: Positive Trust Anchors: May 27 17:01:10.835998 systemd-resolved[1669]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 17:01:10.836075 systemd-resolved[1669]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 17:01:10.839927 systemd-resolved[1669]: Using system hostname 'ci-4344.0.0-a-efe79b1159'. May 27 17:01:10.841593 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 17:01:10.846115 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
May 27 17:01:10.855370 kernel: MACsec IEEE 802.1AE May 27 17:01:10.973761 systemd-networkd[1668]: lo: Link UP May 27 17:01:10.973771 systemd-networkd[1668]: lo: Gained carrier May 27 17:01:10.976638 systemd-networkd[1668]: Enumeration completed May 27 17:01:10.976779 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 17:01:10.977360 systemd-networkd[1668]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:01:10.977368 systemd-networkd[1668]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 17:01:10.981507 systemd[1]: Reached target network.target - Network. May 27 17:01:10.986540 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 17:01:10.992272 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 27 17:01:11.039379 kernel: mlx5_core 6bee:00:02.0 enP27630s1: Link up May 27 17:01:11.102438 kernel: hv_netvsc 002248b8-9ab3-0022-48b8-9ab3002248b8 eth0: Data path switched to VF: enP27630s1 May 27 17:01:11.103530 systemd-networkd[1668]: enP27630s1: Link UP May 27 17:01:11.103636 systemd-networkd[1668]: eth0: Link UP May 27 17:01:11.103639 systemd-networkd[1668]: eth0: Gained carrier May 27 17:01:11.103659 systemd-networkd[1668]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:01:11.105580 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 17:01:11.110841 systemd-networkd[1668]: enP27630s1: Gained carrier May 27 17:01:11.123399 systemd-networkd[1668]: eth0: DHCPv4 address 10.200.20.45/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 27 17:01:11.131481 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:01:11.428487 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 17:01:11.433818 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 17:01:12.411587 systemd-networkd[1668]: enP27630s1: Gained IPv6LL May 27 17:01:13.115478 systemd-networkd[1668]: eth0: Gained IPv6LL May 27 17:01:13.117489 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 17:01:13.123172 systemd[1]: Reached target network-online.target - Network is Online. May 27 17:01:15.045054 ldconfig[1432]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 27 17:01:15.056626 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 27 17:01:15.063533 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 27 17:01:15.101874 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 17:01:15.106477 systemd[1]: Reached target sysinit.target - System Initialization. May 27 17:01:15.110507 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 27 17:01:15.115753 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 17:01:15.120851 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
May 27 17:01:15.125069 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 17:01:15.130232 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 17:01:15.135016 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 17:01:15.135046 systemd[1]: Reached target paths.target - Path Units. May 27 17:01:15.138489 systemd[1]: Reached target timers.target - Timer Units. May 27 17:01:15.143418 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 17:01:15.149278 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 17:01:15.154898 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 17:01:15.159948 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 17:01:15.165125 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 17:01:15.180136 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 17:01:15.184908 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 17:01:15.190159 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 17:01:15.194848 systemd[1]: Reached target sockets.target - Socket Units. May 27 17:01:15.198221 systemd[1]: Reached target basic.target - Basic System. May 27 17:01:15.201757 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 17:01:15.201784 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 17:01:15.203988 systemd[1]: Starting chronyd.service - NTP client/server... May 27 17:01:15.215458 systemd[1]: Starting containerd.service - containerd container runtime... May 27 17:01:15.220619 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 27 17:01:15.227524 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 17:01:15.237166 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 17:01:15.245659 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 17:01:15.250606 (chronyd)[1829]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS May 27 17:01:15.253408 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 17:01:15.260468 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 17:01:15.262529 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. May 27 17:01:15.266898 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). May 27 17:01:15.268072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:01:15.276179 KVP[1839]: KVP starting; pid is:1839 May 27 17:01:15.276446 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
May 27 17:01:15.278573 chronyd[1844]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) May 27 17:01:15.283975 KVP[1839]: KVP LIC Version: 3.1 May 27 17:01:15.284358 kernel: hv_utils: KVP IC version 4.0 May 27 17:01:15.286009 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 17:01:15.294661 jq[1837]: false May 27 17:01:15.295317 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 17:01:15.301522 chronyd[1844]: Timezone right/UTC failed leap second check, ignoring May 27 17:01:15.301692 chronyd[1844]: Loaded seccomp filter (level 2) May 27 17:01:15.308795 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 17:01:15.316515 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 17:01:15.328497 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 17:01:15.335484 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 27 17:01:15.335980 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 17:01:15.337339 extend-filesystems[1838]: Found loop4 May 27 17:01:15.346302 extend-filesystems[1838]: Found loop5 May 27 17:01:15.346302 extend-filesystems[1838]: Found loop6 May 27 17:01:15.346302 extend-filesystems[1838]: Found loop7 May 27 17:01:15.346302 extend-filesystems[1838]: Found sda May 27 17:01:15.346302 extend-filesystems[1838]: Found sda1 May 27 17:01:15.346302 extend-filesystems[1838]: Found sda2 May 27 17:01:15.346302 extend-filesystems[1838]: Found sda3 May 27 17:01:15.346302 extend-filesystems[1838]: Found usr May 27 17:01:15.346302 extend-filesystems[1838]: Found sda4 May 27 17:01:15.346302 extend-filesystems[1838]: Found sda6 May 27 17:01:15.346302 extend-filesystems[1838]: Found sda7 May 27 17:01:15.346302 extend-filesystems[1838]: Found sda9 May 27 17:01:15.346302 extend-filesystems[1838]: Checking size of /dev/sda9 May 27 17:01:15.340233 systemd[1]: Starting update-engine.service - Update Engine... May 27 17:01:15.443433 extend-filesystems[1838]: Old size kept for /dev/sda9 May 27 17:01:15.443433 extend-filesystems[1838]: Found sr0 May 27 17:01:15.357384 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 17:01:15.368612 systemd[1]: Started chronyd.service - NTP client/server. May 27 17:01:15.474413 update_engine[1859]: I20250527 17:01:15.446657 1859 main.cc:92] Flatcar Update Engine starting May 27 17:01:15.383612 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 17:01:15.474666 jq[1863]: true May 27 17:01:15.396110 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 17:01:15.405497 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 17:01:15.405919 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 17:01:15.406068 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 17:01:15.418779 systemd[1]: motdgen.service: Deactivated successfully. May 27 17:01:15.420537 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 17:01:15.439278 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
May 27 17:01:15.450990 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 17:01:15.451185 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 27 17:01:15.485314 systemd-logind[1856]: New seat seat0. May 27 17:01:15.488909 systemd-logind[1856]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) May 27 17:01:15.490191 systemd[1]: Started systemd-logind.service - User Login Management. May 27 17:01:15.500071 (ntainerd)[1886]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 17:01:15.513496 jq[1885]: true May 27 17:01:15.536384 tar[1882]: linux-arm64/LICENSE May 27 17:01:15.536384 tar[1882]: linux-arm64/helm May 27 17:01:15.602858 dbus-daemon[1832]: [system] SELinux support is enabled May 27 17:01:15.603032 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 17:01:15.610779 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 17:01:15.610815 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 17:01:15.616561 update_engine[1859]: I20250527 17:01:15.616163 1859 update_check_scheduler.cc:74] Next update check in 3m32s May 27 17:01:15.617834 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 17:01:15.617859 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 17:01:15.625822 systemd[1]: Started update-engine.service - Update Engine. May 27 17:01:15.629577 dbus-daemon[1832]: [system] Successfully activated service 'org.freedesktop.systemd1' May 27 17:01:15.633606 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 17:01:15.695438 bash[1952]: Updated "/home/core/.ssh/authorized_keys" May 27 17:01:15.690210 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 17:01:15.697695 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 27 17:01:15.712040 coreos-metadata[1831]: May 27 17:01:15.711 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 27 17:01:15.715748 coreos-metadata[1831]: May 27 17:01:15.715 INFO Fetch successful May 27 17:01:15.715906 coreos-metadata[1831]: May 27 17:01:15.715 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 27 17:01:15.720964 coreos-metadata[1831]: May 27 17:01:15.720 INFO Fetch successful May 27 17:01:15.724239 coreos-metadata[1831]: May 27 17:01:15.724 INFO Fetching http://168.63.129.16/machine/1b30f41e-85de-4332-b9c6-5c3625fb9b9e/c6741101%2D096d%2D43f6%2D9c53%2Dbe3a7129c5bc.%5Fci%2D4344.0.0%2Da%2Defe79b1159?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 27 17:01:15.724507 coreos-metadata[1831]: May 27 17:01:15.724 INFO Fetch successful May 27 17:01:15.724643 coreos-metadata[1831]: May 27 17:01:15.724 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 27 17:01:15.739325 coreos-metadata[1831]: May 27 17:01:15.739 INFO Fetch successful May 27 17:01:15.793820 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
May 27 17:01:15.805048 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 17:01:15.947314 locksmithd[1963]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 17:01:16.158434 sshd_keygen[1869]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 17:01:16.200118 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 17:01:16.208849 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 17:01:16.217042 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... May 27 17:01:16.245067 systemd[1]: issuegen.service: Deactivated successfully. May 27 17:01:16.245656 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 17:01:16.255258 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 17:01:16.261510 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. May 27 17:01:16.293630 containerd[1886]: time="2025-05-27T17:01:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 17:01:16.296783 containerd[1886]: time="2025-05-27T17:01:16.296731292Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 17:01:16.300960 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 17:01:16.303607 tar[1882]: linux-arm64/README.md May 27 17:01:16.309470 containerd[1886]: time="2025-05-27T17:01:16.309150316Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.136µs" May 27 17:01:16.309588 containerd[1886]: time="2025-05-27T17:01:16.309571268Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 17:01:16.310553 containerd[1886]: time="2025-05-27T17:01:16.310516420Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 17:01:16.310839 containerd[1886]: time="2025-05-27T17:01:16.310808756Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 17:01:16.310941 containerd[1886]: time="2025-05-27T17:01:16.310928548Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 17:01:16.311006 containerd[1886]: time="2025-05-27T17:01:16.310995844Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 17:01:16.311126 containerd[1886]: time="2025-05-27T17:01:16.311109044Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 17:01:16.311196 containerd[1886]: time="2025-05-27T17:01:16.311182980Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 17:01:16.311509 containerd[1886]: time="2025-05-27T17:01:16.311485316Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 17:01:16.311572 containerd[1886]: time="2025-05-27T17:01:16.311560572Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 17:01:16.311630 containerd[1886]: time="2025-05-27T17:01:16.311617700Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 17:01:16.311670 containerd[1886]: time="2025-05-27T17:01:16.311658244Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 17:01:16.311799 containerd[1886]: time="2025-05-27T17:01:16.311784604Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 17:01:16.312056 containerd[1886]: time="2025-05-27T17:01:16.312034044Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 17:01:16.312151 containerd[1886]: time="2025-05-27T17:01:16.312138548Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 17:01:16.312211 containerd[1886]: time="2025-05-27T17:01:16.312179540Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 17:01:16.312305 containerd[1886]: time="2025-05-27T17:01:16.312253588Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 17:01:16.312568 containerd[1886]: time="2025-05-27T17:01:16.312551212Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 17:01:16.312708 containerd[1886]: time="2025-05-27T17:01:16.312695148Z" level=info msg="metadata content store policy set" policy=shared May 27 17:01:16.314331 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 17:01:16.322152 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 27 17:01:16.328146 systemd[1]: Reached target getty.target - Login Prompts. 
May 27 17:01:16.332935 containerd[1886]: time="2025-05-27T17:01:16.332893540Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 17:01:16.334696 containerd[1886]: time="2025-05-27T17:01:16.333071388Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 17:01:16.334696 containerd[1886]: time="2025-05-27T17:01:16.333165876Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 17:01:16.334696 containerd[1886]: time="2025-05-27T17:01:16.333180348Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 17:01:16.334696 containerd[1886]: time="2025-05-27T17:01:16.333190580Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 17:01:16.334696 containerd[1886]: time="2025-05-27T17:01:16.333197476Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 17:01:16.334696 containerd[1886]: time="2025-05-27T17:01:16.333205652Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 17:01:16.334696 containerd[1886]: time="2025-05-27T17:01:16.333219436Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 17:01:16.334696 containerd[1886]: time="2025-05-27T17:01:16.333228036Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 17:01:16.334696 containerd[1886]: time="2025-05-27T17:01:16.333234860Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 17:01:16.334696 containerd[1886]: time="2025-05-27T17:01:16.333240340Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 17:01:16.334696 containerd[1886]: time="2025-05-27T17:01:16.333252820Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 17:01:16.334696 containerd[1886]: time="2025-05-27T17:01:16.333425060Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 17:01:16.334696 containerd[1886]: time="2025-05-27T17:01:16.333443812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 17:01:16.334696 containerd[1886]: time="2025-05-27T17:01:16.333455356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 17:01:16.334998 containerd[1886]: time="2025-05-27T17:01:16.333462908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 17:01:16.334998 containerd[1886]: time="2025-05-27T17:01:16.333470204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 17:01:16.334998 containerd[1886]: time="2025-05-27T17:01:16.333477828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 17:01:16.334998 containerd[1886]: time="2025-05-27T17:01:16.333487404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 17:01:16.334998 containerd[1886]: time="2025-05-27T17:01:16.333495148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 
27 17:01:16.334998 containerd[1886]: time="2025-05-27T17:01:16.333504364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 17:01:16.334998 containerd[1886]: time="2025-05-27T17:01:16.333510780Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 17:01:16.334998 containerd[1886]: time="2025-05-27T17:01:16.333517476Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 17:01:16.334998 containerd[1886]: time="2025-05-27T17:01:16.333592100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 17:01:16.334998 containerd[1886]: time="2025-05-27T17:01:16.333606540Z" level=info msg="Start snapshots syncer" May 27 17:01:16.334998 containerd[1886]: time="2025-05-27T17:01:16.333624172Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 17:01:16.335122 containerd[1886]: time="2025-05-27T17:01:16.333799996Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 17:01:16.335122 containerd[1886]: time="2025-05-27T17:01:16.333837164Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 17:01:16.335087 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
May 27 17:01:16.335248 containerd[1886]: time="2025-05-27T17:01:16.333900364Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 17:01:16.335248 containerd[1886]: time="2025-05-27T17:01:16.334025076Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 17:01:16.335248 containerd[1886]: time="2025-05-27T17:01:16.334040724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 17:01:16.335248 containerd[1886]: time="2025-05-27T17:01:16.334047956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 17:01:16.335248 containerd[1886]: time="2025-05-27T17:01:16.334054788Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 17:01:16.335248 containerd[1886]: time="2025-05-27T17:01:16.334062444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 17:01:16.335248 containerd[1886]: time="2025-05-27T17:01:16.334070676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 17:01:16.335248 containerd[1886]: time="2025-05-27T17:01:16.334077756Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 17:01:16.335248 containerd[1886]: time="2025-05-27T17:01:16.334099972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 17:01:16.335248 containerd[1886]: time="2025-05-27T17:01:16.334107804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 17:01:16.335248 containerd[1886]: time="2025-05-27T17:01:16.334114428Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 17:01:16.335248 containerd[1886]: time="2025-05-27T17:01:16.334142972Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:01:16.335248 containerd[1886]: time="2025-05-27T17:01:16.334153556Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:01:16.335248 containerd[1886]: time="2025-05-27T17:01:16.334158556Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:01:16.335434 containerd[1886]: time="2025-05-27T17:01:16.334164380Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:01:16.335434 containerd[1886]: time="2025-05-27T17:01:16.334168748Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 17:01:16.335434 containerd[1886]: time="2025-05-27T17:01:16.334174020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 17:01:16.335434 containerd[1886]: time="2025-05-27T17:01:16.334180652Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 17:01:16.335434 containerd[1886]: time="2025-05-27T17:01:16.334193012Z" level=info msg="runtime interface created" May 27 17:01:16.335434 containerd[1886]: time="2025-05-27T17:01:16.334196676Z" level=info 
msg="created NRI interface" May 27 17:01:16.335434 containerd[1886]: time="2025-05-27T17:01:16.334202684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 17:01:16.335434 containerd[1886]: time="2025-05-27T17:01:16.334214380Z" level=info msg="Connect containerd service" May 27 17:01:16.335434 containerd[1886]: time="2025-05-27T17:01:16.334233860Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 17:01:16.339962 containerd[1886]: time="2025-05-27T17:01:16.338989972Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:01:16.374277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:01:16.385290 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:01:16.642853 kubelet[2024]: E0527 17:01:16.642713 2024 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:01:16.644924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:01:16.645040 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:01:16.645302 systemd[1]: kubelet.service: Consumed 570ms CPU time, 256.1M memory peak. May 27 17:01:17.201370 containerd[1886]: time="2025-05-27T17:01:17.200903748Z" level=info msg="Start subscribing containerd event" May 27 17:01:17.201370 containerd[1886]: time="2025-05-27T17:01:17.200942164Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 17:01:17.201370 containerd[1886]: time="2025-05-27T17:01:17.200981700Z" level=info msg="Start recovering state" May 27 17:01:17.201370 containerd[1886]: time="2025-05-27T17:01:17.201007028Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 17:01:17.201370 containerd[1886]: time="2025-05-27T17:01:17.201065364Z" level=info msg="Start event monitor" May 27 17:01:17.201370 containerd[1886]: time="2025-05-27T17:01:17.201076812Z" level=info msg="Start cni network conf syncer for default" May 27 17:01:17.201370 containerd[1886]: time="2025-05-27T17:01:17.201082180Z" level=info msg="Start streaming server" May 27 17:01:17.201370 containerd[1886]: time="2025-05-27T17:01:17.201090308Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 17:01:17.201370 containerd[1886]: time="2025-05-27T17:01:17.201095228Z" level=info msg="runtime interface starting up..." May 27 17:01:17.201370 containerd[1886]: time="2025-05-27T17:01:17.201100156Z" level=info msg="starting plugins..." May 27 17:01:17.201370 containerd[1886]: time="2025-05-27T17:01:17.201109604Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 17:01:17.201370 containerd[1886]: time="2025-05-27T17:01:17.201231412Z" level=info msg="containerd successfully booted in 0.910117s" May 27 17:01:17.201470 systemd[1]: Started containerd.service - containerd container runtime. May 27 17:01:17.209227 systemd[1]: Reached target multi-user.target - Multi-User System. 
May 27 17:01:17.216001 systemd[1]: Startup finished in 1.664s (kernel) + 13.218s (initrd) + 14.528s (userspace) = 29.411s. May 27 17:01:17.451938 login[2015]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying May 27 17:01:17.452154 login[2014]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 27 17:01:17.468530 systemd-logind[1856]: New session 1 of user core. May 27 17:01:17.470164 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 17:01:17.473003 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 17:01:17.497011 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 17:01:17.499568 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 17:01:17.526945 (systemd)[2048]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 17:01:17.529640 systemd-logind[1856]: New session c1 of user core. May 27 17:01:17.767729 systemd[2048]: Queued start job for default target default.target. May 27 17:01:17.773158 systemd[2048]: Created slice app.slice - User Application Slice. May 27 17:01:17.773362 systemd[2048]: Reached target paths.target - Paths. May 27 17:01:17.773504 systemd[2048]: Reached target timers.target - Timers. May 27 17:01:17.774983 systemd[2048]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 17:01:17.784063 systemd[2048]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 17:01:17.784907 systemd[2048]: Reached target sockets.target - Sockets. May 27 17:01:17.785157 systemd[2048]: Reached target basic.target - Basic System. May 27 17:01:17.785270 systemd[2048]: Reached target default.target - Main User Target. May 27 17:01:17.785420 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 17:01:17.786511 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 17:01:17.786673 systemd[2048]: Startup finished in 251ms. 
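The system-wide "Startup finished in …" entry above breaks boot time into kernel, initrd and userspace stages. A throwaway parsing sketch over that exact string; the sample is copied from the log, and the regex is an assumption about the message format:

    # Illustrative log-parsing sketch: extract per-stage boot timings from the line above.
    import re

    SAMPLE = ("Startup finished in 1.664s (kernel) + 13.218s (initrd) "
              "+ 14.528s (userspace) = 29.411s.")

    def parse_startup(line: str) -> dict:
        """Return a mapping of stage name -> seconds, plus the reported total."""
        stages = {name: float(sec) for sec, name in re.findall(r"([\d.]+)s \((\w+)\)", line)}
        stages["total"] = float(re.search(r"= ([\d.]+)s", line).group(1))
        return stages

    if __name__ == "__main__":
        print(parse_startup(SAMPLE))  # {'kernel': 1.664, 'initrd': 13.218, 'userspace': 14.528, 'total': 29.411}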
May 27 17:01:18.016141 waagent[2007]: 2025-05-27T17:01:18.011983Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 May 27 17:01:18.016495 waagent[2007]: 2025-05-27T17:01:18.016351Z INFO Daemon Daemon OS: flatcar 4344.0.0 May 27 17:01:18.019689 waagent[2007]: 2025-05-27T17:01:18.019597Z INFO Daemon Daemon Python: 3.11.12 May 27 17:01:18.025084 waagent[2007]: 2025-05-27T17:01:18.022967Z INFO Daemon Daemon Run daemon May 27 17:01:18.025940 waagent[2007]: 2025-05-27T17:01:18.025900Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.0.0' May 27 17:01:18.032662 waagent[2007]: 2025-05-27T17:01:18.032606Z INFO Daemon Daemon Using waagent for provisioning May 27 17:01:18.036806 waagent[2007]: 2025-05-27T17:01:18.036761Z INFO Daemon Daemon Activate resource disk May 27 17:01:18.040462 waagent[2007]: 2025-05-27T17:01:18.040409Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 27 17:01:18.048677 waagent[2007]: 2025-05-27T17:01:18.048624Z INFO Daemon Daemon Found device: None May 27 17:01:18.051826 waagent[2007]: 2025-05-27T17:01:18.051776Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 27 17:01:18.059287 waagent[2007]: 2025-05-27T17:01:18.059241Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 27 17:01:18.069880 waagent[2007]: 2025-05-27T17:01:18.069827Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 27 17:01:18.075239 waagent[2007]: 2025-05-27T17:01:18.075198Z INFO Daemon Daemon Running default provisioning handler May 27 17:01:18.085715 waagent[2007]: 2025-05-27T17:01:18.085647Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. May 27 17:01:18.095390 waagent[2007]: 2025-05-27T17:01:18.095323Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 27 17:01:18.102212 waagent[2007]: 2025-05-27T17:01:18.102170Z INFO Daemon Daemon cloud-init is enabled: False May 27 17:01:18.105709 waagent[2007]: 2025-05-27T17:01:18.105679Z INFO Daemon Daemon Copying ovf-env.xml May 27 17:01:18.217366 waagent[2007]: 2025-05-27T17:01:18.214640Z INFO Daemon Daemon Successfully mounted dvd May 27 17:01:18.242077 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 27 17:01:18.248071 waagent[2007]: 2025-05-27T17:01:18.244526Z INFO Daemon Daemon Detect protocol endpoint May 27 17:01:18.248482 waagent[2007]: 2025-05-27T17:01:18.248424Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 27 17:01:18.252634 waagent[2007]: 2025-05-27T17:01:18.252575Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler May 27 17:01:18.257485 waagent[2007]: 2025-05-27T17:01:18.257428Z INFO Daemon Daemon Test for route to 168.63.129.16 May 27 17:01:18.262445 waagent[2007]: 2025-05-27T17:01:18.262383Z INFO Daemon Daemon Route to 168.63.129.16 exists May 27 17:01:18.266558 waagent[2007]: 2025-05-27T17:01:18.266504Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 27 17:01:18.315481 waagent[2007]: 2025-05-27T17:01:18.315366Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 27 17:01:18.320797 waagent[2007]: 2025-05-27T17:01:18.320763Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 27 17:01:18.324918 waagent[2007]: 2025-05-27T17:01:18.324867Z INFO Daemon Daemon Server preferred version:2015-04-05 May 27 17:01:18.453410 login[2015]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 27 17:01:18.458655 systemd-logind[1856]: New session 2 of user core. May 27 17:01:18.463653 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 17:01:18.475744 waagent[2007]: 2025-05-27T17:01:18.475653Z INFO Daemon Daemon Initializing goal state during protocol detection May 27 17:01:18.481598 waagent[2007]: 2025-05-27T17:01:18.481520Z INFO Daemon Daemon Forcing an update of the goal state. May 27 17:01:18.490572 waagent[2007]: 2025-05-27T17:01:18.490510Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 27 17:01:18.508117 waagent[2007]: 2025-05-27T17:01:18.508072Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 27 17:01:18.513064 waagent[2007]: 2025-05-27T17:01:18.513021Z INFO Daemon May 27 17:01:18.515236 waagent[2007]: 2025-05-27T17:01:18.515187Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6d8ccd1f-b248-49c7-ac6d-79e1bbcc890f eTag: 13950928609707167658 source: Fabric] May 27 17:01:18.523712 waagent[2007]: 2025-05-27T17:01:18.523670Z INFO Daemon The vmSettings originated via Fabric; will ignore them. May 27 17:01:18.529015 waagent[2007]: 2025-05-27T17:01:18.528977Z INFO Daemon May 27 17:01:18.531269 waagent[2007]: 2025-05-27T17:01:18.531237Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 27 17:01:18.540379 waagent[2007]: 2025-05-27T17:01:18.540329Z INFO Daemon Daemon Downloading artifacts profile blob May 27 17:01:18.615512 waagent[2007]: 2025-05-27T17:01:18.615374Z INFO Daemon Downloaded certificate {'thumbprint': '5D9CB40FDC3A02CFCA1BE09A71C7177068182B1C', 'hasPrivateKey': False} May 27 17:01:18.623272 waagent[2007]: 2025-05-27T17:01:18.623227Z INFO Daemon Downloaded certificate {'thumbprint': '0880EF3705AF53619EA7B7D54DF7992BBC62DE68', 'hasPrivateKey': True} May 27 17:01:18.630662 waagent[2007]: 2025-05-27T17:01:18.630619Z INFO Daemon Fetch goal state completed May 27 17:01:18.641417 waagent[2007]: 2025-05-27T17:01:18.641378Z INFO Daemon Daemon Starting provisioning May 27 17:01:18.645345 waagent[2007]: 2025-05-27T17:01:18.645292Z INFO Daemon Daemon Handle ovf-env.xml. 
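The 40-hex "thumbprint" values the agent logs above follow the usual Azure convention of a SHA-1 digest over the DER-encoded certificate. A small sketch that reproduces that form from a PEM file; the path used is a placeholder for illustration, not one confirmed by this log:

    # Illustrative sketch: compute an Azure-style certificate thumbprint (SHA-1 over DER).
    import hashlib
    import ssl

    def thumbprint(pem_path: str) -> str:
        """Return the uppercase SHA-1 thumbprint of a PEM-encoded certificate."""
        with open(pem_path) as fh:
            der = ssl.PEM_cert_to_DER_cert(fh.read())
        return hashlib.sha1(der).hexdigest().upper()

    if __name__ == "__main__":
        print(thumbprint("/var/lib/waagent/Certificates.pem"))  # placeholder path, an assumption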
May 27 17:01:18.649059 waagent[2007]: 2025-05-27T17:01:18.649018Z INFO Daemon Daemon Set hostname [ci-4344.0.0-a-efe79b1159] May 27 17:01:18.687372 waagent[2007]: 2025-05-27T17:01:18.686746Z INFO Daemon Daemon Publish hostname [ci-4344.0.0-a-efe79b1159] May 27 17:01:18.691615 waagent[2007]: 2025-05-27T17:01:18.691553Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 27 17:01:18.696070 waagent[2007]: 2025-05-27T17:01:18.696021Z INFO Daemon Daemon Primary interface is [eth0] May 27 17:01:18.706543 systemd-networkd[1668]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:01:18.706551 systemd-networkd[1668]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 17:01:18.706585 systemd-networkd[1668]: eth0: DHCP lease lost May 27 17:01:18.707470 waagent[2007]: 2025-05-27T17:01:18.707401Z INFO Daemon Daemon Create user account if not exists May 27 17:01:18.711438 waagent[2007]: 2025-05-27T17:01:18.711386Z INFO Daemon Daemon User core already exists, skip useradd May 27 17:01:18.715665 waagent[2007]: 2025-05-27T17:01:18.715596Z INFO Daemon Daemon Configure sudoer May 27 17:01:18.722239 waagent[2007]: 2025-05-27T17:01:18.722168Z INFO Daemon Daemon Configure sshd May 27 17:01:18.728968 waagent[2007]: 2025-05-27T17:01:18.728896Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. May 27 17:01:18.738074 waagent[2007]: 2025-05-27T17:01:18.738024Z INFO Daemon Daemon Deploy ssh public key. May 27 17:01:18.738427 systemd-networkd[1668]: eth0: DHCPv4 address 10.200.20.45/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 27 17:01:19.854096 waagent[2007]: 2025-05-27T17:01:19.854045Z INFO Daemon Daemon Provisioning complete May 27 17:01:19.867870 waagent[2007]: 2025-05-27T17:01:19.867813Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 27 17:01:19.872637 waagent[2007]: 2025-05-27T17:01:19.872574Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
May 27 17:01:19.879577 waagent[2007]: 2025-05-27T17:01:19.879525Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent May 27 17:01:19.985938 waagent[2102]: 2025-05-27T17:01:19.985423Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) May 27 17:01:19.985938 waagent[2102]: 2025-05-27T17:01:19.985573Z INFO ExtHandler ExtHandler OS: flatcar 4344.0.0 May 27 17:01:19.985938 waagent[2102]: 2025-05-27T17:01:19.985612Z INFO ExtHandler ExtHandler Python: 3.11.12 May 27 17:01:19.985938 waagent[2102]: 2025-05-27T17:01:19.985649Z INFO ExtHandler ExtHandler CPU Arch: aarch64 May 27 17:01:20.039100 waagent[2102]: 2025-05-27T17:01:20.039029Z INFO ExtHandler ExtHandler Distro: flatcar-4344.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; May 27 17:01:20.039441 waagent[2102]: 2025-05-27T17:01:20.039404Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 27 17:01:20.039576 waagent[2102]: 2025-05-27T17:01:20.039553Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 27 17:01:20.046070 waagent[2102]: 2025-05-27T17:01:20.045998Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 27 17:01:20.052406 waagent[2102]: 2025-05-27T17:01:20.051920Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 27 17:01:20.052514 waagent[2102]: 2025-05-27T17:01:20.052442Z INFO ExtHandler May 27 17:01:20.052528 waagent[2102]: 2025-05-27T17:01:20.052508Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d23a6f67-e8ea-49e3-84d6-9e8ee6d5d073 eTag: 13950928609707167658 source: Fabric] May 27 17:01:20.052769 waagent[2102]: 2025-05-27T17:01:20.052738Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
May 27 17:01:20.053187 waagent[2102]: 2025-05-27T17:01:20.053157Z INFO ExtHandler May 27 17:01:20.053223 waagent[2102]: 2025-05-27T17:01:20.053208Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 27 17:01:20.057252 waagent[2102]: 2025-05-27T17:01:20.057220Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 27 17:01:20.125133 waagent[2102]: 2025-05-27T17:01:20.124990Z INFO ExtHandler Downloaded certificate {'thumbprint': '5D9CB40FDC3A02CFCA1BE09A71C7177068182B1C', 'hasPrivateKey': False} May 27 17:01:20.125468 waagent[2102]: 2025-05-27T17:01:20.125430Z INFO ExtHandler Downloaded certificate {'thumbprint': '0880EF3705AF53619EA7B7D54DF7992BBC62DE68', 'hasPrivateKey': True} May 27 17:01:20.125818 waagent[2102]: 2025-05-27T17:01:20.125785Z INFO ExtHandler Fetch goal state completed May 27 17:01:20.138980 waagent[2102]: 2025-05-27T17:01:20.138910Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) May 27 17:01:20.144481 waagent[2102]: 2025-05-27T17:01:20.144408Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2102 May 27 17:01:20.144612 waagent[2102]: 2025-05-27T17:01:20.144581Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 27 17:01:20.144901 waagent[2102]: 2025-05-27T17:01:20.144870Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** May 27 17:01:20.146084 waagent[2102]: 2025-05-27T17:01:20.146043Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.0.0', '', 'Flatcar Container Linux by Kinvolk'] May 27 17:01:20.146467 waagent[2102]: 2025-05-27T17:01:20.146434Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported May 27 17:01:20.146605 waagent[2102]: 2025-05-27T17:01:20.146580Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False May 27 17:01:20.147055 waagent[2102]: 2025-05-27T17:01:20.147024Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 27 17:01:20.196068 waagent[2102]: 2025-05-27T17:01:20.196027Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 27 17:01:20.196267 waagent[2102]: 2025-05-27T17:01:20.196240Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 27 17:01:20.201568 waagent[2102]: 2025-05-27T17:01:20.201526Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 27 17:01:20.207133 systemd[1]: Reload requested from client PID 2119 ('systemctl') (unit waagent.service)... May 27 17:01:20.207516 systemd[1]: Reloading... May 27 17:01:20.289376 zram_generator::config[2156]: No configuration found. May 27 17:01:20.356844 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:01:20.440783 systemd[1]: Reloading finished in 232 ms. 
May 27 17:01:20.473369 waagent[2102]: 2025-05-27T17:01:20.473261Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 27 17:01:20.473484 waagent[2102]: 2025-05-27T17:01:20.473456Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 27 17:01:22.216212 waagent[2102]: 2025-05-27T17:01:22.216128Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. May 27 17:01:22.216555 waagent[2102]: 2025-05-27T17:01:22.216490Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] May 27 17:01:22.217251 waagent[2102]: 2025-05-27T17:01:22.217176Z INFO ExtHandler ExtHandler Starting env monitor service. May 27 17:01:22.217577 waagent[2102]: 2025-05-27T17:01:22.217529Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 27 17:01:22.218043 waagent[2102]: 2025-05-27T17:01:22.217934Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 27 17:01:22.218169 waagent[2102]: 2025-05-27T17:01:22.218142Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 27 17:01:22.218284 waagent[2102]: 2025-05-27T17:01:22.218258Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 27 17:01:22.218366 waagent[2102]: 2025-05-27T17:01:22.218335Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 27 17:01:22.218469 waagent[2102]: 2025-05-27T17:01:22.218450Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 27 17:01:22.218782 waagent[2102]: 2025-05-27T17:01:22.218742Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 27 17:01:22.218901 waagent[2102]: 2025-05-27T17:01:22.218795Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 27 17:01:22.219274 waagent[2102]: 2025-05-27T17:01:22.219233Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 27 17:01:22.219502 waagent[2102]: 2025-05-27T17:01:22.219463Z INFO EnvHandler ExtHandler Configure routes May 27 17:01:22.219539 waagent[2102]: 2025-05-27T17:01:22.219504Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
May 27 17:01:22.220035 waagent[2102]: 2025-05-27T17:01:22.220005Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 27 17:01:22.220141 waagent[2102]: 2025-05-27T17:01:22.220120Z INFO EnvHandler ExtHandler Gateway:None May 27 17:01:22.220358 waagent[2102]: 2025-05-27T17:01:22.220313Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 27 17:01:22.220358 waagent[2102]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 27 17:01:22.220358 waagent[2102]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 27 17:01:22.220358 waagent[2102]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 27 17:01:22.220358 waagent[2102]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 27 17:01:22.220358 waagent[2102]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 27 17:01:22.220358 waagent[2102]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 27 17:01:22.220581 waagent[2102]: 2025-05-27T17:01:22.220551Z INFO EnvHandler ExtHandler Routes:None May 27 17:01:22.230369 waagent[2102]: 2025-05-27T17:01:22.229820Z INFO ExtHandler ExtHandler May 27 17:01:22.230369 waagent[2102]: 2025-05-27T17:01:22.229901Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 6c008872-f78e-4674-92d2-3368b5f7cc3d correlation bb8109fc-b342-4d26-92b2-acd93d7b55c4 created: 2025-05-27T17:00:02.965515Z] May 27 17:01:22.230369 waagent[2102]: 2025-05-27T17:01:22.230195Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 27 17:01:22.231017 waagent[2102]: 2025-05-27T17:01:22.230982Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] May 27 17:01:22.318951 waagent[2102]: 2025-05-27T17:01:22.318894Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command May 27 17:01:22.318951 waagent[2102]: Try `iptables -h' or 'iptables --help' for more information.) 
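The routing table the agent quotes above comes from /proc/net/route, which stores the IPv4 destination, gateway and mask fields as little-endian hex. A short decoding sketch using the default-route row copied from the log; it resolves the gateway field 0114C80A back to 10.200.20.1, matching the DHCP lease recorded earlier:

    # Decoding sketch: convert little-endian hex fields from /proc/net/route to dotted-quad.
    import socket
    import struct

    def hex_to_ip(value: str) -> str:
        """Convert a /proc/net/route hex field (host byte order, little-endian here) to dotted-quad."""
        return socket.inet_ntoa(struct.pack("<I", int(value, 16)))

    if __name__ == "__main__":
        # eth0 default-route row from the log: Destination=00000000 Gateway=0114C80A
        print(hex_to_ip("00000000"), "via", hex_to_ip("0114C80A"))  # 0.0.0.0 via 10.200.20.1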
May 27 17:01:22.319570 waagent[2102]: 2025-05-27T17:01:22.319532Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F75C7881-F837-4971-B2DF-9CF6BF2D3426;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] May 27 17:01:22.332256 waagent[2102]: 2025-05-27T17:01:22.332198Z INFO MonitorHandler ExtHandler Network interfaces: May 27 17:01:22.332256 waagent[2102]: Executing ['ip', '-a', '-o', 'link']: May 27 17:01:22.332256 waagent[2102]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 27 17:01:22.332256 waagent[2102]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b8:9a:b3 brd ff:ff:ff:ff:ff:ff May 27 17:01:22.332256 waagent[2102]: 3: enP27630s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b8:9a:b3 brd ff:ff:ff:ff:ff:ff\ altname enP27630p0s2 May 27 17:01:22.332256 waagent[2102]: Executing ['ip', '-4', '-a', '-o', 'address']: May 27 17:01:22.332256 waagent[2102]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 27 17:01:22.332256 waagent[2102]: 2: eth0 inet 10.200.20.45/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 27 17:01:22.332256 waagent[2102]: Executing ['ip', '-6', '-a', '-o', 'address']: May 27 17:01:22.332256 waagent[2102]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever May 27 17:01:22.332256 waagent[2102]: 2: eth0 inet6 fe80::222:48ff:feb8:9ab3/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 27 17:01:22.332256 waagent[2102]: 3: enP27630s1 inet6 fe80::222:48ff:feb8:9ab3/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 27 17:01:22.396917 waagent[2102]: 2025-05-27T17:01:22.396858Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: May 27 17:01:22.396917 waagent[2102]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 27 17:01:22.396917 waagent[2102]: pkts bytes target prot opt in out source destination May 27 17:01:22.396917 waagent[2102]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 27 17:01:22.396917 waagent[2102]: pkts bytes target prot opt in out source destination May 27 17:01:22.396917 waagent[2102]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 27 17:01:22.396917 waagent[2102]: pkts bytes target prot opt in out source destination May 27 17:01:22.396917 waagent[2102]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 27 17:01:22.396917 waagent[2102]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 27 17:01:22.396917 waagent[2102]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 27 17:01:22.400040 waagent[2102]: 2025-05-27T17:01:22.399693Z INFO EnvHandler ExtHandler Current Firewall rules: May 27 17:01:22.400040 waagent[2102]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 27 17:01:22.400040 waagent[2102]: pkts bytes target prot opt in out source destination May 27 17:01:22.400040 waagent[2102]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 27 17:01:22.400040 waagent[2102]: pkts bytes target prot opt in out source destination May 27 17:01:22.400040 waagent[2102]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 27 17:01:22.400040 waagent[2102]: pkts bytes target prot opt in out source destination May 27 
17:01:22.400040 waagent[2102]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 27 17:01:22.400040 waagent[2102]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 27 17:01:22.400040 waagent[2102]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 27 17:01:22.400040 waagent[2102]: 2025-05-27T17:01:22.399948Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 27 17:01:26.895813 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 17:01:26.897279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:01:27.016910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:01:27.023422 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:01:27.147361 kubelet[2252]: E0527 17:01:27.147225 2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:01:27.150040 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:01:27.150159 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:01:27.150710 systemd[1]: kubelet.service: Consumed 119ms CPU time, 106.5M memory peak. May 27 17:01:37.400732 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 27 17:01:37.402440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:01:37.504808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:01:37.507709 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:01:37.534708 kubelet[2267]: E0527 17:01:37.534645 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:01:37.537261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:01:37.537552 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:01:37.538191 systemd[1]: kubelet.service: Consumed 110ms CPU time, 104.9M memory peak. May 27 17:01:39.086830 chronyd[1844]: Selected source PHC0 May 27 17:01:47.769483 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 27 17:01:47.771160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:01:48.235804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
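The three OUTPUT rules dumped above restrict traffic to the wireserver (168.63.129.16): DNS and root-owned (UID 0) connections are accepted, and any other new or invalid connection is dropped. A reconstruction sketch of equivalent iptables invocations, inferred from the rule dump rather than taken from waagent itself, so the exact flags are an approximation:

    # Reconstruction sketch: the wireserver rules above, expressed as iptables command lines.
    import shlex

    WIRESERVER = "168.63.129.16"

    RULES = [
        # Allow DNS to the wireserver from any process.
        f"iptables -w -t security -A OUTPUT -d {WIRESERVER} -p tcp --dport 53 -j ACCEPT",
        # Allow root-owned traffic, i.e. the agent itself.
        f"iptables -w -t security -A OUTPUT -d {WIRESERVER} -p tcp -m owner --uid-owner 0 -j ACCEPT",
        # Drop new or invalid connections from everything else.
        f"iptables -w -t security -A OUTPUT -d {WIRESERVER} -p tcp -m conntrack --ctstate INVALID,NEW -j DROP",
    ]

    if __name__ == "__main__":
        for rule in RULES:
            print(shlex.split(rule))  # printed as argv lists; pass to subprocess.run() to apply

The sketch only prints the argv lists; applying them would require root and would duplicate what the agent has already configured on this host.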
May 27 17:01:48.246637 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:01:48.274011 kubelet[2281]: E0527 17:01:48.273916 2281 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:01:48.276427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:01:48.276557 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:01:48.277059 systemd[1]: kubelet.service: Consumed 113ms CPU time, 107.4M memory peak. May 27 17:01:50.280124 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 17:01:50.283595 systemd[1]: Started sshd@0-10.200.20.45:22-10.200.16.10:43340.service - OpenSSH per-connection server daemon (10.200.16.10:43340). May 27 17:01:52.059547 sshd[2288]: Accepted publickey for core from 10.200.16.10 port 43340 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:01:52.060740 sshd-session[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:01:52.065399 systemd-logind[1856]: New session 3 of user core. May 27 17:01:52.074549 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 17:01:52.481444 systemd[1]: Started sshd@1-10.200.20.45:22-10.200.16.10:43344.service - OpenSSH per-connection server daemon (10.200.16.10:43344). May 27 17:01:52.966992 sshd[2293]: Accepted publickey for core from 10.200.16.10 port 43344 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:01:52.968405 sshd-session[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:01:52.972975 systemd-logind[1856]: New session 4 of user core. May 27 17:01:52.978519 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 17:01:53.312949 sshd[2295]: Connection closed by 10.200.16.10 port 43344 May 27 17:01:53.313630 sshd-session[2293]: pam_unix(sshd:session): session closed for user core May 27 17:01:53.317250 systemd[1]: sshd@1-10.200.20.45:22-10.200.16.10:43344.service: Deactivated successfully. May 27 17:01:53.318954 systemd[1]: session-4.scope: Deactivated successfully. May 27 17:01:53.320242 systemd-logind[1856]: Session 4 logged out. Waiting for processes to exit. May 27 17:01:53.321332 systemd-logind[1856]: Removed session 4. May 27 17:01:53.408677 systemd[1]: Started sshd@2-10.200.20.45:22-10.200.16.10:43346.service - OpenSSH per-connection server daemon (10.200.16.10:43346). May 27 17:01:53.896214 sshd[2301]: Accepted publickey for core from 10.200.16.10 port 43346 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:01:53.897466 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:01:53.901821 systemd-logind[1856]: New session 5 of user core. May 27 17:01:53.911540 systemd[1]: Started session-5.scope - Session 5 of User core. May 27 17:01:54.247940 sshd[2303]: Connection closed by 10.200.16.10 port 43346 May 27 17:01:54.248660 sshd-session[2301]: pam_unix(sshd:session): session closed for user core May 27 17:01:54.252094 systemd-logind[1856]: Session 5 logged out. Waiting for processes to exit. 
May 27 17:01:54.252835 systemd[1]: sshd@2-10.200.20.45:22-10.200.16.10:43346.service: Deactivated successfully. May 27 17:01:54.254812 systemd[1]: session-5.scope: Deactivated successfully. May 27 17:01:54.256197 systemd-logind[1856]: Removed session 5. May 27 17:01:54.341428 systemd[1]: Started sshd@3-10.200.20.45:22-10.200.16.10:43354.service - OpenSSH per-connection server daemon (10.200.16.10:43354). May 27 17:01:54.830034 sshd[2309]: Accepted publickey for core from 10.200.16.10 port 43354 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:01:54.831619 sshd-session[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:01:54.835668 systemd-logind[1856]: New session 6 of user core. May 27 17:01:54.854521 systemd[1]: Started session-6.scope - Session 6 of User core. May 27 17:01:55.175614 sshd[2311]: Connection closed by 10.200.16.10 port 43354 May 27 17:01:55.176110 sshd-session[2309]: pam_unix(sshd:session): session closed for user core May 27 17:01:55.179514 systemd[1]: sshd@3-10.200.20.45:22-10.200.16.10:43354.service: Deactivated successfully. May 27 17:01:55.181307 systemd[1]: session-6.scope: Deactivated successfully. May 27 17:01:55.182182 systemd-logind[1856]: Session 6 logged out. Waiting for processes to exit. May 27 17:01:55.183898 systemd-logind[1856]: Removed session 6. May 27 17:01:55.272652 systemd[1]: Started sshd@4-10.200.20.45:22-10.200.16.10:43358.service - OpenSSH per-connection server daemon (10.200.16.10:43358). May 27 17:01:55.766906 sshd[2317]: Accepted publickey for core from 10.200.16.10 port 43358 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:01:55.768118 sshd-session[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:01:55.771990 systemd-logind[1856]: New session 7 of user core. May 27 17:01:55.779504 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 17:01:56.168934 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 17:01:56.169177 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:01:56.196265 sudo[2320]: pam_unix(sudo:session): session closed for user root May 27 17:01:56.280464 sshd[2319]: Connection closed by 10.200.16.10 port 43358 May 27 17:01:56.281290 sshd-session[2317]: pam_unix(sshd:session): session closed for user core May 27 17:01:56.285063 systemd-logind[1856]: Session 7 logged out. Waiting for processes to exit. May 27 17:01:56.285673 systemd[1]: sshd@4-10.200.20.45:22-10.200.16.10:43358.service: Deactivated successfully. May 27 17:01:56.287122 systemd[1]: session-7.scope: Deactivated successfully. May 27 17:01:56.293632 systemd-logind[1856]: Removed session 7. May 27 17:01:56.366599 systemd[1]: Started sshd@5-10.200.20.45:22-10.200.16.10:43370.service - OpenSSH per-connection server daemon (10.200.16.10:43370). May 27 17:01:56.822696 sshd[2326]: Accepted publickey for core from 10.200.16.10 port 43370 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:01:56.823969 sshd-session[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:01:56.828169 systemd-logind[1856]: New session 8 of user core. May 27 17:01:56.837551 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 27 17:01:57.079058 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 17:01:57.079301 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:01:57.085835 sudo[2330]: pam_unix(sudo:session): session closed for user root May 27 17:01:57.090087 sudo[2329]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 17:01:57.090697 sudo[2329]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:01:57.098965 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 17:01:57.128198 augenrules[2352]: No rules May 27 17:01:57.129614 systemd[1]: audit-rules.service: Deactivated successfully. May 27 17:01:57.130401 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 17:01:57.131775 sudo[2329]: pam_unix(sudo:session): session closed for user root May 27 17:01:57.219184 sshd[2328]: Connection closed by 10.200.16.10 port 43370 May 27 17:01:57.219562 sshd-session[2326]: pam_unix(sshd:session): session closed for user core May 27 17:01:57.223120 systemd[1]: sshd@5-10.200.20.45:22-10.200.16.10:43370.service: Deactivated successfully. May 27 17:01:57.224621 systemd[1]: session-8.scope: Deactivated successfully. May 27 17:01:57.226004 systemd-logind[1856]: Session 8 logged out. Waiting for processes to exit. May 27 17:01:57.227189 systemd-logind[1856]: Removed session 8. May 27 17:01:57.306370 systemd[1]: Started sshd@6-10.200.20.45:22-10.200.16.10:43374.service - OpenSSH per-connection server daemon (10.200.16.10:43374). May 27 17:01:57.765264 sshd[2361]: Accepted publickey for core from 10.200.16.10 port 43374 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:01:57.766521 sshd-session[2361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:01:57.770802 systemd-logind[1856]: New session 9 of user core. May 27 17:01:57.781765 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 17:01:58.024067 sudo[2364]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 17:01:58.024729 sudo[2364]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:01:58.519122 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 27 17:01:58.520550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:01:58.673298 kernel: hv_balloon: Max. dynamic memory size: 4096 MB May 27 17:02:00.807265 update_engine[1859]: I20250527 17:02:00.807154 1859 update_attempter.cc:509] Updating boot flags... May 27 17:02:01.985420 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 17:02:01.990650 (dockerd)[2553]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 17:02:04.708039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 17:02:04.719713 (kubelet)[2563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:02:04.748127 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:02:04.821045 kubelet[2563]: E0527 17:02:04.746636 2563 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:02:04.748230 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:02:04.748803 systemd[1]: kubelet.service: Consumed 117ms CPU time, 107M memory peak. May 27 17:02:04.832219 dockerd[2553]: time="2025-05-27T17:02:04.831955443Z" level=info msg="Starting up" May 27 17:02:04.869236 dockerd[2553]: time="2025-05-27T17:02:04.833427987Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 17:02:06.128723 dockerd[2553]: time="2025-05-27T17:02:06.128508799Z" level=info msg="Loading containers: start." May 27 17:02:06.224592 kernel: Initializing XFRM netlink socket May 27 17:02:06.720672 systemd-networkd[1668]: docker0: Link UP May 27 17:02:06.774223 dockerd[2553]: time="2025-05-27T17:02:06.774106365Z" level=info msg="Loading containers: done." May 27 17:02:06.785505 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3502128444-merged.mount: Deactivated successfully. May 27 17:02:07.418293 dockerd[2553]: time="2025-05-27T17:02:07.418216850Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 17:02:07.418664 dockerd[2553]: time="2025-05-27T17:02:07.418324830Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 17:02:07.418664 dockerd[2553]: time="2025-05-27T17:02:07.418484099Z" level=info msg="Initializing buildkit" May 27 17:02:07.622174 dockerd[2553]: time="2025-05-27T17:02:07.622124521Z" level=info msg="Completed buildkit initialization" May 27 17:02:07.627775 dockerd[2553]: time="2025-05-27T17:02:07.627725920Z" level=info msg="Daemon has completed initialization" May 27 17:02:07.627775 dockerd[2553]: time="2025-05-27T17:02:07.627820292Z" level=info msg="API listen on /run/docker.sock" May 27 17:02:07.628041 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 17:02:08.178391 containerd[1886]: time="2025-05-27T17:02:08.178350651Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 27 17:02:09.996787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1068428801.mount: Deactivated successfully. May 27 17:02:14.769205 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 27 17:02:14.771135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:02:14.869001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 17:02:14.874856 (kubelet)[2788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:02:14.989135 kubelet[2788]: E0527 17:02:14.989039 2788 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:02:14.991473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:02:14.991587 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:02:14.991876 systemd[1]: kubelet.service: Consumed 113ms CPU time, 107M memory peak. May 27 17:02:17.711669 containerd[1886]: time="2025-05-27T17:02:17.711608683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:17.715705 containerd[1886]: time="2025-05-27T17:02:17.715503747Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=27349350" May 27 17:02:17.720533 containerd[1886]: time="2025-05-27T17:02:17.720503013Z" level=info msg="ImageCreate event name:\"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:17.724711 containerd[1886]: time="2025-05-27T17:02:17.724668893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:17.725303 containerd[1886]: time="2025-05-27T17:02:17.725090610Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"27346150\" in 9.546704214s" May 27 17:02:17.725303 containerd[1886]: time="2025-05-27T17:02:17.725124915Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\"" May 27 17:02:17.726449 containerd[1886]: time="2025-05-27T17:02:17.726410275Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 27 17:02:19.356114 containerd[1886]: time="2025-05-27T17:02:19.355490362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:19.361788 containerd[1886]: time="2025-05-27T17:02:19.361748090Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=23531735" May 27 17:02:19.366840 containerd[1886]: time="2025-05-27T17:02:19.366805139Z" level=info msg="ImageCreate event name:\"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:19.381636 containerd[1886]: time="2025-05-27T17:02:19.381537753Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:19.382174 containerd[1886]: time="2025-05-27T17:02:19.381998528Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"25086427\" in 1.655459145s" May 27 17:02:19.382174 containerd[1886]: time="2025-05-27T17:02:19.382030137Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\"" May 27 17:02:19.382607 containerd[1886]: time="2025-05-27T17:02:19.382571546Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 27 17:02:20.615389 containerd[1886]: time="2025-05-27T17:02:20.614830473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:20.617727 containerd[1886]: time="2025-05-27T17:02:20.617671571Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=18293731" May 27 17:02:20.629194 containerd[1886]: time="2025-05-27T17:02:20.629016941Z" level=info msg="ImageCreate event name:\"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:20.639972 containerd[1886]: time="2025-05-27T17:02:20.639891376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:20.640900 containerd[1886]: time="2025-05-27T17:02:20.640863759Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"19848441\" in 1.258172905s" May 27 17:02:20.641027 containerd[1886]: time="2025-05-27T17:02:20.641013276Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\"" May 27 17:02:20.641682 containerd[1886]: time="2025-05-27T17:02:20.641608311Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 27 17:02:25.019639 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 27 17:02:25.021650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:02:25.120727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 17:02:25.126684 (kubelet)[2847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:02:25.155947 kubelet[2847]: E0527 17:02:25.155875 2847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:02:25.158361 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:02:25.158638 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:02:25.159270 systemd[1]: kubelet.service: Consumed 112ms CPU time, 105.2M memory peak. May 27 17:02:29.231970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1949869797.mount: Deactivated successfully. May 27 17:02:29.929002 containerd[1886]: time="2025-05-27T17:02:29.928939645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:29.977552 containerd[1886]: time="2025-05-27T17:02:29.977498713Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=28196004" May 27 17:02:29.985991 containerd[1886]: time="2025-05-27T17:02:29.985949708Z" level=info msg="ImageCreate event name:\"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:30.020647 containerd[1886]: time="2025-05-27T17:02:30.020565046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:30.021504 containerd[1886]: time="2025-05-27T17:02:30.021071137Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"28195023\" in 9.379254116s" May 27 17:02:30.021504 containerd[1886]: time="2025-05-27T17:02:30.021111978Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\"" May 27 17:02:30.021664 containerd[1886]: time="2025-05-27T17:02:30.021639382Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 27 17:02:35.269186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. May 27 17:02:35.270696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:02:36.923760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 17:02:36.929643 (kubelet)[2870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:02:36.957818 kubelet[2870]: E0527 17:02:36.957760 2870 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:02:36.960282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:02:36.960573 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:02:36.961155 systemd[1]: kubelet.service: Consumed 116ms CPU time, 104.7M memory peak. May 27 17:02:37.933519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547641731.mount: Deactivated successfully. May 27 17:02:39.200712 containerd[1886]: time="2025-05-27T17:02:39.200618333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:39.203815 containerd[1886]: time="2025-05-27T17:02:39.203773205Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" May 27 17:02:39.209725 containerd[1886]: time="2025-05-27T17:02:39.209655439Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:39.216615 containerd[1886]: time="2025-05-27T17:02:39.216544608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:39.217119 containerd[1886]: time="2025-05-27T17:02:39.216898219Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 9.195230876s" May 27 17:02:39.217119 containerd[1886]: time="2025-05-27T17:02:39.216931788Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" May 27 17:02:39.217362 containerd[1886]: time="2025-05-27T17:02:39.217330888Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 17:02:40.278741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount236188882.mount: Deactivated successfully. 
May 27 17:02:40.310723 containerd[1886]: time="2025-05-27T17:02:40.310667334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:02:40.319995 containerd[1886]: time="2025-05-27T17:02:40.319788131Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 27 17:02:40.327045 containerd[1886]: time="2025-05-27T17:02:40.327017886Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:02:40.336284 containerd[1886]: time="2025-05-27T17:02:40.336209012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:02:40.337003 containerd[1886]: time="2025-05-27T17:02:40.336701787Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.119335274s" May 27 17:02:40.337003 containerd[1886]: time="2025-05-27T17:02:40.336735188Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 27 17:02:40.337267 containerd[1886]: time="2025-05-27T17:02:40.337212483Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 27 17:02:47.019168 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. May 27 17:02:47.021037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:02:48.710780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:02:48.716661 (kubelet)[2948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:02:48.745166 kubelet[2948]: E0527 17:02:48.745081 2948 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:02:48.747305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:02:48.747444 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:02:48.747934 systemd[1]: kubelet.service: Consumed 112ms CPU time, 106.4M memory peak. 
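pause:3.10 is the CRI sandbox image; note the extra io.cri-containerd.pinned=pinned label on its ImageCreate events above, which keeps image garbage collection from ever evicting it. A way to confirm this on the node (not shown in the log; the exact output shape depends on the crictl and containerd versions in use) would be:

    # Check that the sandbox image is present; recent CRI versions also report it as pinned.
    sudo crictl images | grep pause
    sudo crictl inspecti registry.k8s.io/pause:3.10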
May 27 17:02:51.975367 containerd[1886]: time="2025-05-27T17:02:51.974643122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:52.017432 containerd[1886]: time="2025-05-27T17:02:52.017382540Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69230163" May 27 17:02:52.080733 containerd[1886]: time="2025-05-27T17:02:52.080683217Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:52.089602 containerd[1886]: time="2025-05-27T17:02:52.089520232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:52.090205 containerd[1886]: time="2025-05-27T17:02:52.090172302Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 11.752936403s" May 27 17:02:52.090205 containerd[1886]: time="2025-05-27T17:02:52.090206799Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" May 27 17:02:54.197904 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:02:54.198014 systemd[1]: kubelet.service: Consumed 112ms CPU time, 106.4M memory peak. May 27 17:02:54.199996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:02:54.226525 systemd[1]: Reload requested from client PID 2986 ('systemctl') (unit session-9.scope)... May 27 17:02:54.226686 systemd[1]: Reloading... May 27 17:02:54.304526 zram_generator::config[3029]: No configuration found. May 27 17:02:54.387098 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:02:54.472471 systemd[1]: Reloading finished in 245 ms. May 27 17:02:54.531898 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 17:02:54.532126 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 17:02:54.532450 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:02:54.536065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:02:56.546907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:02:56.555674 (kubelet)[3096]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:02:56.582060 kubelet[3096]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:02:56.582060 kubelet[3096]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
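The docker.socket notice during the reload above asks for the legacy /var/run/docker.sock path to be replaced with /run/docker.sock. Since Flatcar ships its unit files read-only under /usr, the usual route is a drop-in; the sketch below is illustrative (the drop-in path is an assumption, the socket path comes from the warning itself):

    # Hypothetical drop-in that silences the legacy-path warning by resetting ListenStream=.
    sudo mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' | sudo tee /etc/systemd/system/docker.socket.d/10-listen.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    sudo systemctl daemon-reload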
May 27 17:02:56.582060 kubelet[3096]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:02:56.582467 kubelet[3096]: I0527 17:02:56.582093 3096 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:02:57.485135 kubelet[3096]: I0527 17:02:57.485087 3096 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 17:02:57.485135 kubelet[3096]: I0527 17:02:57.485125 3096 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:02:57.486126 kubelet[3096]: I0527 17:02:57.485319 3096 server.go:956] "Client rotation is on, will bootstrap in background" May 27 17:02:57.532383 kubelet[3096]: E0527 17:02:57.530591 3096 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.45:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 27 17:02:57.532383 kubelet[3096]: I0527 17:02:57.531813 3096 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:02:57.538179 kubelet[3096]: I0527 17:02:57.538155 3096 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:02:57.541042 kubelet[3096]: I0527 17:02:57.541013 3096 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 17:02:57.541441 kubelet[3096]: I0527 17:02:57.541415 3096 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:02:57.541644 kubelet[3096]: I0527 17:02:57.541514 3096 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.0.0-a-efe79b1159","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:02:57.541779 kubelet[3096]: I0527 17:02:57.541766 3096 topology_manager.go:138] "Creating topology manager with none policy" May 27 17:02:57.541832 kubelet[3096]: I0527 17:02:57.541825 3096 container_manager_linux.go:303] "Creating device plugin manager" May 27 17:02:57.542012 kubelet[3096]: I0527 17:02:57.541999 3096 state_mem.go:36] "Initialized new in-memory state store" May 27 17:02:57.543710 kubelet[3096]: I0527 17:02:57.543689 3096 kubelet.go:480] "Attempting to sync node with API server" May 27 17:02:57.543819 kubelet[3096]: I0527 17:02:57.543807 3096 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:02:57.543908 kubelet[3096]: I0527 17:02:57.543900 3096 kubelet.go:386] "Adding apiserver pod source" May 27 17:02:57.544796 kubelet[3096]: I0527 17:02:57.544779 3096 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:02:57.546887 kubelet[3096]: E0527 17:02:57.546863 3096 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.0.0-a-efe79b1159&limit=500&resourceVersion=0\": dial tcp 10.200.20.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 27 17:02:57.548134 kubelet[3096]: E0527 17:02:57.548100 3096 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" May 27 17:02:57.548204 kubelet[3096]: I0527 17:02:57.548194 3096 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:02:57.548607 kubelet[3096]: I0527 17:02:57.548589 3096 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 17:02:57.548656 kubelet[3096]: W0527 17:02:57.548646 3096 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 27 17:02:57.550390 kubelet[3096]: I0527 17:02:57.550372 3096 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:02:57.550454 kubelet[3096]: I0527 17:02:57.550412 3096 server.go:1289] "Started kubelet" May 27 17:02:57.551209 kubelet[3096]: I0527 17:02:57.551174 3096 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:02:57.551927 kubelet[3096]: I0527 17:02:57.551896 3096 server.go:317] "Adding debug handlers to kubelet server" May 27 17:02:57.552047 kubelet[3096]: I0527 17:02:57.552034 3096 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:02:57.554815 kubelet[3096]: I0527 17:02:57.554763 3096 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:02:57.555468 kubelet[3096]: I0527 17:02:57.555214 3096 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:02:57.555760 kubelet[3096]: I0527 17:02:57.555745 3096 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:02:57.555937 kubelet[3096]: I0527 17:02:57.555926 3096 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:02:57.556462 kubelet[3096]: E0527 17:02:57.556437 3096 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-a-efe79b1159\" not found" May 27 17:02:57.559904 kubelet[3096]: I0527 17:02:57.559880 3096 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:02:57.560056 kubelet[3096]: I0527 17:02:57.560048 3096 reconciler.go:26] "Reconciler: start to sync state" May 27 17:02:57.563792 kubelet[3096]: E0527 17:02:57.563763 3096 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-efe79b1159?timeout=10s\": dial tcp 10.200.20.45:6443: connect: connection refused" interval="200ms" May 27 17:02:57.563993 kubelet[3096]: E0527 17:02:57.563961 3096 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 17:02:57.564378 kubelet[3096]: E0527 17:02:57.561919 3096 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.45:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.45:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.0.0-a-efe79b1159.1843710c327b86c6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.0.0-a-efe79b1159,UID:ci-4344.0.0-a-efe79b1159,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.0.0-a-efe79b1159,},FirstTimestamp:2025-05-27 17:02:57.550386886 +0000 UTC m=+0.990971664,LastTimestamp:2025-05-27 17:02:57.550386886 +0000 UTC m=+0.990971664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.0.0-a-efe79b1159,}" May 27 17:02:57.565828 kubelet[3096]: I0527 17:02:57.564900 3096 factory.go:223] Registration of the systemd container factory successfully May 27 17:02:57.565828 kubelet[3096]: I0527 17:02:57.564973 3096 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:02:57.566832 kubelet[3096]: I0527 17:02:57.566816 3096 factory.go:223] Registration of the containerd container factory successfully May 27 17:02:57.575775 kubelet[3096]: E0527 17:02:57.575735 3096 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:02:57.579115 kubelet[3096]: I0527 17:02:57.579040 3096 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:02:57.579115 kubelet[3096]: I0527 17:02:57.579058 3096 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:02:57.579115 kubelet[3096]: I0527 17:02:57.579077 3096 state_mem.go:36] "Initialized new in-memory state store" May 27 17:02:57.657598 kubelet[3096]: E0527 17:02:57.657551 3096 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-a-efe79b1159\" not found" May 27 17:02:57.758076 kubelet[3096]: E0527 17:02:57.757946 3096 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-a-efe79b1159\" not found" May 27 17:02:57.764830 kubelet[3096]: E0527 17:02:57.764782 3096 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-efe79b1159?timeout=10s\": dial tcp 10.200.20.45:6443: connect: connection refused" interval="400ms" May 27 17:02:57.858337 kubelet[3096]: E0527 17:02:57.858288 3096 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-a-efe79b1159\" not found" May 27 17:02:57.880216 kubelet[3096]: I0527 17:02:57.880170 3096 policy_none.go:49] "None policy: Start" May 27 17:02:57.880216 kubelet[3096]: I0527 17:02:57.880210 3096 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:02:57.880216 kubelet[3096]: I0527 17:02:57.880229 3096 state_mem.go:35] "Initializing new in-memory state store" May 27 17:02:57.889815 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 17:02:57.898999 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 17:02:57.902543 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
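The kubelet[3096] startup lines at 17:02:56.582 above show --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir still being passed as flags, each drawing a deprecation warning that points at the --config file. Two of them have direct KubeletConfiguration equivalents; the fragment below is shown only, not applied, and its values are assumptions (the Flexvolume path is taken from the probe message at 17:02:57.548, the socket path is the usual containerd default):

    # Config-file equivalents of two of the deprecated flags (illustrative values).
    cat <<'EOF'
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF
    # --pod-infra-container-image has no KubeletConfiguration field; per the warning it is removed in 1.35.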
May 27 17:02:57.917184 kubelet[3096]: E0527 17:02:57.917154 3096 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 17:02:57.917778 kubelet[3096]: I0527 17:02:57.917372 3096 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:02:57.917778 kubelet[3096]: I0527 17:02:57.917386 3096 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:02:57.917778 kubelet[3096]: I0527 17:02:57.917620 3096 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:02:57.920510 kubelet[3096]: E0527 17:02:57.920481 3096 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 17:02:57.920817 kubelet[3096]: E0527 17:02:57.920521 3096 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.0.0-a-efe79b1159\" not found" May 27 17:02:57.951610 kubelet[3096]: I0527 17:02:57.951558 3096 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 17:02:57.952876 kubelet[3096]: I0527 17:02:57.952832 3096 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 27 17:02:57.952876 kubelet[3096]: I0527 17:02:57.952860 3096 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 17:02:57.953167 kubelet[3096]: I0527 17:02:57.953106 3096 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 17:02:57.953167 kubelet[3096]: I0527 17:02:57.953117 3096 kubelet.go:2436] "Starting kubelet main sync loop" May 27 17:02:57.953289 kubelet[3096]: E0527 17:02:57.953157 3096 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 27 17:02:57.954085 kubelet[3096]: E0527 17:02:57.953804 3096 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 27 17:02:58.019570 kubelet[3096]: I0527 17:02:58.019446 3096 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-efe79b1159" May 27 17:02:58.020070 kubelet[3096]: E0527 17:02:58.020034 3096 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.45:6443/api/v1/nodes\": dial tcp 10.200.20.45:6443: connect: connection refused" node="ci-4344.0.0-a-efe79b1159" May 27 17:02:58.063461 kubelet[3096]: I0527 17:02:58.063386 3096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6dea6871a880e77ff4a908a4e552067f-kubeconfig\") pod \"kube-scheduler-ci-4344.0.0-a-efe79b1159\" (UID: \"6dea6871a880e77ff4a908a4e552067f\") " pod="kube-system/kube-scheduler-ci-4344.0.0-a-efe79b1159" May 27 17:02:58.071407 systemd[1]: Created slice kubepods-burstable-pod6dea6871a880e77ff4a908a4e552067f.slice - libcontainer container kubepods-burstable-pod6dea6871a880e77ff4a908a4e552067f.slice. 
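The HardEvictionThresholds array in the "Creating Container Manager object" dump at 17:02:57.541 above is the kubelet's default hard-eviction configuration. Expressed in KubeletConfiguration syntax, the same numbers (100Mi memory, 10% nodefs, 5% nodefs inodes, 15% imagefs, 5% imagefs inodes) read as follows; this is a transcription of the values already in the log, not host-specific tuning:

    # The thresholds from the NodeConfig dump above, rewritten as an evictionHard stanza.
    cat <<'EOF'
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"
    EOF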
May 27 17:02:58.078062 kubelet[3096]: E0527 17:02:58.077924 3096 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-efe79b1159\" not found" node="ci-4344.0.0-a-efe79b1159" May 27 17:02:58.081671 systemd[1]: Created slice kubepods-burstable-podc09dfb2c3dcf8e9b59790b18757a8a84.slice - libcontainer container kubepods-burstable-podc09dfb2c3dcf8e9b59790b18757a8a84.slice. May 27 17:02:58.091523 kubelet[3096]: E0527 17:02:58.091489 3096 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-efe79b1159\" not found" node="ci-4344.0.0-a-efe79b1159" May 27 17:02:58.106588 systemd[1]: Created slice kubepods-burstable-pod8eee989c9e517f1f29a082cead3c3edf.slice - libcontainer container kubepods-burstable-pod8eee989c9e517f1f29a082cead3c3edf.slice. May 27 17:02:58.108436 kubelet[3096]: E0527 17:02:58.108394 3096 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-efe79b1159\" not found" node="ci-4344.0.0-a-efe79b1159" May 27 17:02:58.163749 kubelet[3096]: I0527 17:02:58.163709 3096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8eee989c9e517f1f29a082cead3c3edf-ca-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-efe79b1159\" (UID: \"8eee989c9e517f1f29a082cead3c3edf\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-efe79b1159" May 27 17:02:58.163749 kubelet[3096]: I0527 17:02:58.163750 3096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8eee989c9e517f1f29a082cead3c3edf-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.0.0-a-efe79b1159\" (UID: \"8eee989c9e517f1f29a082cead3c3edf\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-efe79b1159" May 27 17:02:58.163957 kubelet[3096]: I0527 17:02:58.163801 3096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8eee989c9e517f1f29a082cead3c3edf-kubeconfig\") pod \"kube-controller-manager-ci-4344.0.0-a-efe79b1159\" (UID: \"8eee989c9e517f1f29a082cead3c3edf\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-efe79b1159" May 27 17:02:58.163957 kubelet[3096]: I0527 17:02:58.163813 3096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8eee989c9e517f1f29a082cead3c3edf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.0.0-a-efe79b1159\" (UID: \"8eee989c9e517f1f29a082cead3c3edf\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-efe79b1159" May 27 17:02:58.163957 kubelet[3096]: I0527 17:02:58.163828 3096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c09dfb2c3dcf8e9b59790b18757a8a84-ca-certs\") pod \"kube-apiserver-ci-4344.0.0-a-efe79b1159\" (UID: \"c09dfb2c3dcf8e9b59790b18757a8a84\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-efe79b1159" May 27 17:02:58.163957 kubelet[3096]: I0527 17:02:58.163839 3096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c09dfb2c3dcf8e9b59790b18757a8a84-k8s-certs\") 
pod \"kube-apiserver-ci-4344.0.0-a-efe79b1159\" (UID: \"c09dfb2c3dcf8e9b59790b18757a8a84\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-efe79b1159" May 27 17:02:58.163957 kubelet[3096]: I0527 17:02:58.163869 3096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8eee989c9e517f1f29a082cead3c3edf-k8s-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-efe79b1159\" (UID: \"8eee989c9e517f1f29a082cead3c3edf\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-efe79b1159" May 27 17:02:58.164042 kubelet[3096]: I0527 17:02:58.163881 3096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c09dfb2c3dcf8e9b59790b18757a8a84-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.0.0-a-efe79b1159\" (UID: \"c09dfb2c3dcf8e9b59790b18757a8a84\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-efe79b1159" May 27 17:02:58.165299 kubelet[3096]: E0527 17:02:58.165252 3096 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-efe79b1159?timeout=10s\": dial tcp 10.200.20.45:6443: connect: connection refused" interval="800ms" May 27 17:02:58.222012 kubelet[3096]: I0527 17:02:58.221961 3096 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-efe79b1159" May 27 17:02:58.222353 kubelet[3096]: E0527 17:02:58.222320 3096 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.45:6443/api/v1/nodes\": dial tcp 10.200.20.45:6443: connect: connection refused" node="ci-4344.0.0-a-efe79b1159" May 27 17:02:58.379942 containerd[1886]: time="2025-05-27T17:02:58.379532604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.0.0-a-efe79b1159,Uid:6dea6871a880e77ff4a908a4e552067f,Namespace:kube-system,Attempt:0,}" May 27 17:02:58.393875 containerd[1886]: time="2025-05-27T17:02:58.393721268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.0.0-a-efe79b1159,Uid:c09dfb2c3dcf8e9b59790b18757a8a84,Namespace:kube-system,Attempt:0,}" May 27 17:02:58.410308 containerd[1886]: time="2025-05-27T17:02:58.410132584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.0.0-a-efe79b1159,Uid:8eee989c9e517f1f29a082cead3c3edf,Namespace:kube-system,Attempt:0,}" May 27 17:02:58.497238 containerd[1886]: time="2025-05-27T17:02:58.496670193Z" level=info msg="connecting to shim e51ad1a6c6f35837e822f4859741b525c72b23f3dfb2586f2ecc38feb67d9239" address="unix:///run/containerd/s/29866d777c4585d344d41b2517dc931cfe1ee9bc68f6946997a1c6a4167f8c82" namespace=k8s.io protocol=ttrpc version=3 May 27 17:02:58.523549 systemd[1]: Started cri-containerd-e51ad1a6c6f35837e822f4859741b525c72b23f3dfb2586f2ecc38feb67d9239.scope - libcontainer container e51ad1a6c6f35837e822f4859741b525c72b23f3dfb2586f2ecc38feb67d9239. 
May 27 17:02:58.547308 kubelet[3096]: E0527 17:02:58.547267 3096 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 17:02:58.556287 containerd[1886]: time="2025-05-27T17:02:58.555879475Z" level=info msg="connecting to shim 054da20006d723515f9e5ea8c8657a26124a908ce0129a6985a0edc9c74d1e80" address="unix:///run/containerd/s/e28893114f2ed024f58bfeb755bc9565f58a4c16f042765a794241a93571f721" namespace=k8s.io protocol=ttrpc version=3 May 27 17:02:58.560117 containerd[1886]: time="2025-05-27T17:02:58.560074125Z" level=info msg="connecting to shim 7af48f86b312a2531251dbb937f6f9777c41c75cd19ff027dfec6deff9d024bb" address="unix:///run/containerd/s/66636c500258107ce90927174659c027604c0e4e7d0e5d531e4979072e3330f7" namespace=k8s.io protocol=ttrpc version=3 May 27 17:02:58.579860 containerd[1886]: time="2025-05-27T17:02:58.579712333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.0.0-a-efe79b1159,Uid:6dea6871a880e77ff4a908a4e552067f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e51ad1a6c6f35837e822f4859741b525c72b23f3dfb2586f2ecc38feb67d9239\"" May 27 17:02:58.587525 systemd[1]: Started cri-containerd-7af48f86b312a2531251dbb937f6f9777c41c75cd19ff027dfec6deff9d024bb.scope - libcontainer container 7af48f86b312a2531251dbb937f6f9777c41c75cd19ff027dfec6deff9d024bb. May 27 17:02:58.594876 systemd[1]: Started cri-containerd-054da20006d723515f9e5ea8c8657a26124a908ce0129a6985a0edc9c74d1e80.scope - libcontainer container 054da20006d723515f9e5ea8c8657a26124a908ce0129a6985a0edc9c74d1e80. 
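The reflector and node-registration failures keep recurring here for the same reason as before: everything the kubelet attempts against https://10.200.20.45:6443 is refused because the kube-apiserver it is trying to reach is the very static pod whose sandbox (054da200...) it is creating in these lines. Once that container is running, port 6443 starts answering and the retries succeed. A hedged way to watch for that moment from the node (not part of the log; assumes kubeadm's default anonymous access to /healthz):

    # Poll the endpoint seen in the log until the static kube-apiserver pod answers.
    until curl -ksf https://10.200.20.45:6443/healthz >/dev/null; do sleep 2; done; echo "apiserver is up"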
May 27 17:02:58.596131 containerd[1886]: time="2025-05-27T17:02:58.595487262Z" level=info msg="CreateContainer within sandbox \"e51ad1a6c6f35837e822f4859741b525c72b23f3dfb2586f2ecc38feb67d9239\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 17:02:58.624689 kubelet[3096]: I0527 17:02:58.624659 3096 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-efe79b1159" May 27 17:02:58.625805 kubelet[3096]: E0527 17:02:58.625770 3096 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.45:6443/api/v1/nodes\": dial tcp 10.200.20.45:6443: connect: connection refused" node="ci-4344.0.0-a-efe79b1159" May 27 17:02:58.635686 containerd[1886]: time="2025-05-27T17:02:58.635582536Z" level=info msg="Container a01342688def5484d653525e353a6b4abc8cd09c919c4e01746ca5ced30c2b7c: CDI devices from CRI Config.CDIDevices: []" May 27 17:02:58.645413 containerd[1886]: time="2025-05-27T17:02:58.645317294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.0.0-a-efe79b1159,Uid:8eee989c9e517f1f29a082cead3c3edf,Namespace:kube-system,Attempt:0,} returns sandbox id \"7af48f86b312a2531251dbb937f6f9777c41c75cd19ff027dfec6deff9d024bb\"" May 27 17:02:58.651315 containerd[1886]: time="2025-05-27T17:02:58.651241245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.0.0-a-efe79b1159,Uid:c09dfb2c3dcf8e9b59790b18757a8a84,Namespace:kube-system,Attempt:0,} returns sandbox id \"054da20006d723515f9e5ea8c8657a26124a908ce0129a6985a0edc9c74d1e80\"" May 27 17:02:58.653374 containerd[1886]: time="2025-05-27T17:02:58.652965291Z" level=info msg="CreateContainer within sandbox \"7af48f86b312a2531251dbb937f6f9777c41c75cd19ff027dfec6deff9d024bb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 17:02:58.659645 containerd[1886]: time="2025-05-27T17:02:58.659608776Z" level=info msg="CreateContainer within sandbox \"054da20006d723515f9e5ea8c8657a26124a908ce0129a6985a0edc9c74d1e80\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 17:02:58.674860 containerd[1886]: time="2025-05-27T17:02:58.674815504Z" level=info msg="CreateContainer within sandbox \"e51ad1a6c6f35837e822f4859741b525c72b23f3dfb2586f2ecc38feb67d9239\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a01342688def5484d653525e353a6b4abc8cd09c919c4e01746ca5ced30c2b7c\"" May 27 17:02:58.675767 containerd[1886]: time="2025-05-27T17:02:58.675737212Z" level=info msg="StartContainer for \"a01342688def5484d653525e353a6b4abc8cd09c919c4e01746ca5ced30c2b7c\"" May 27 17:02:58.676654 containerd[1886]: time="2025-05-27T17:02:58.676630952Z" level=info msg="connecting to shim a01342688def5484d653525e353a6b4abc8cd09c919c4e01746ca5ced30c2b7c" address="unix:///run/containerd/s/29866d777c4585d344d41b2517dc931cfe1ee9bc68f6946997a1c6a4167f8c82" protocol=ttrpc version=3 May 27 17:02:58.694534 systemd[1]: Started cri-containerd-a01342688def5484d653525e353a6b4abc8cd09c919c4e01746ca5ced30c2b7c.scope - libcontainer container a01342688def5484d653525e353a6b4abc8cd09c919c4e01746ca5ced30c2b7c. 
May 27 17:02:58.729143 containerd[1886]: time="2025-05-27T17:02:58.729093977Z" level=info msg="Container 0728e739839578fbc0701b6fa1d958d57dec6f6d68d36216440c1ccd7012235f: CDI devices from CRI Config.CDIDevices: []" May 27 17:02:58.732046 containerd[1886]: time="2025-05-27T17:02:58.731962090Z" level=info msg="StartContainer for \"a01342688def5484d653525e353a6b4abc8cd09c919c4e01746ca5ced30c2b7c\" returns successfully" May 27 17:02:58.745260 containerd[1886]: time="2025-05-27T17:02:58.744583513Z" level=info msg="Container b7a09e695f26b711c30b315e632e8b9a97e228acffd829e36d5b169d4c49e655: CDI devices from CRI Config.CDIDevices: []" May 27 17:02:58.761980 containerd[1886]: time="2025-05-27T17:02:58.761935690Z" level=info msg="CreateContainer within sandbox \"054da20006d723515f9e5ea8c8657a26124a908ce0129a6985a0edc9c74d1e80\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0728e739839578fbc0701b6fa1d958d57dec6f6d68d36216440c1ccd7012235f\"" May 27 17:02:58.763117 kubelet[3096]: E0527 17:02:58.763082 3096 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 27 17:02:58.763408 containerd[1886]: time="2025-05-27T17:02:58.763225434Z" level=info msg="StartContainer for \"0728e739839578fbc0701b6fa1d958d57dec6f6d68d36216440c1ccd7012235f\"" May 27 17:02:58.765346 containerd[1886]: time="2025-05-27T17:02:58.764472889Z" level=info msg="connecting to shim 0728e739839578fbc0701b6fa1d958d57dec6f6d68d36216440c1ccd7012235f" address="unix:///run/containerd/s/e28893114f2ed024f58bfeb755bc9565f58a4c16f042765a794241a93571f721" protocol=ttrpc version=3 May 27 17:02:58.781716 systemd[1]: Started cri-containerd-0728e739839578fbc0701b6fa1d958d57dec6f6d68d36216440c1ccd7012235f.scope - libcontainer container 0728e739839578fbc0701b6fa1d958d57dec6f6d68d36216440c1ccd7012235f. May 27 17:02:58.782863 containerd[1886]: time="2025-05-27T17:02:58.782825306Z" level=info msg="CreateContainer within sandbox \"7af48f86b312a2531251dbb937f6f9777c41c75cd19ff027dfec6deff9d024bb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b7a09e695f26b711c30b315e632e8b9a97e228acffd829e36d5b169d4c49e655\"" May 27 17:02:58.784312 containerd[1886]: time="2025-05-27T17:02:58.784284775Z" level=info msg="StartContainer for \"b7a09e695f26b711c30b315e632e8b9a97e228acffd829e36d5b169d4c49e655\"" May 27 17:02:58.786354 containerd[1886]: time="2025-05-27T17:02:58.785636577Z" level=info msg="connecting to shim b7a09e695f26b711c30b315e632e8b9a97e228acffd829e36d5b169d4c49e655" address="unix:///run/containerd/s/66636c500258107ce90927174659c027604c0e4e7d0e5d531e4979072e3330f7" protocol=ttrpc version=3 May 27 17:02:58.810630 systemd[1]: Started cri-containerd-b7a09e695f26b711c30b315e632e8b9a97e228acffd829e36d5b169d4c49e655.scope - libcontainer container b7a09e695f26b711c30b315e632e8b9a97e228acffd829e36d5b169d4c49e655. 
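At this point containerd is running one container per control-plane sandbox. The container and sandbox IDs printed in the CreateContainer/StartContainer lines can be inspected directly with crictl; these commands are not in the log and are shown only as a way to follow up on it:

    # List the sandboxes and containers created above, then tail the scheduler container named in the log.
    sudo crictl pods
    sudo crictl ps -a
    sudo crictl logs a01342688def5484d653525e353a6b4abc8cd09c919c4e01746ca5ced30c2b7c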
May 27 17:02:58.851081 containerd[1886]: time="2025-05-27T17:02:58.851029138Z" level=info msg="StartContainer for \"0728e739839578fbc0701b6fa1d958d57dec6f6d68d36216440c1ccd7012235f\" returns successfully" May 27 17:02:58.878158 containerd[1886]: time="2025-05-27T17:02:58.878117738Z" level=info msg="StartContainer for \"b7a09e695f26b711c30b315e632e8b9a97e228acffd829e36d5b169d4c49e655\" returns successfully" May 27 17:02:58.966991 kubelet[3096]: E0527 17:02:58.966865 3096 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-efe79b1159\" not found" node="ci-4344.0.0-a-efe79b1159" May 27 17:02:58.972271 kubelet[3096]: E0527 17:02:58.972080 3096 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-efe79b1159\" not found" node="ci-4344.0.0-a-efe79b1159" May 27 17:02:58.974975 kubelet[3096]: E0527 17:02:58.974948 3096 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-efe79b1159\" not found" node="ci-4344.0.0-a-efe79b1159" May 27 17:02:59.430375 kubelet[3096]: I0527 17:02:59.429295 3096 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-efe79b1159" May 27 17:02:59.975683 kubelet[3096]: E0527 17:02:59.975652 3096 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-efe79b1159\" not found" node="ci-4344.0.0-a-efe79b1159" May 27 17:02:59.975999 kubelet[3096]: E0527 17:02:59.975980 3096 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-efe79b1159\" not found" node="ci-4344.0.0-a-efe79b1159" May 27 17:03:00.375673 kubelet[3096]: E0527 17:03:00.375629 3096 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.0.0-a-efe79b1159\" not found" node="ci-4344.0.0-a-efe79b1159" May 27 17:03:00.548496 kubelet[3096]: I0527 17:03:00.548451 3096 apiserver.go:52] "Watching apiserver" May 27 17:03:00.560623 kubelet[3096]: I0527 17:03:00.560566 3096 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:03:00.573580 kubelet[3096]: I0527 17:03:00.573517 3096 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.0.0-a-efe79b1159" May 27 17:03:00.573580 kubelet[3096]: E0527 17:03:00.573562 3096 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344.0.0-a-efe79b1159\": node \"ci-4344.0.0-a-efe79b1159\" not found" May 27 17:03:00.658616 kubelet[3096]: I0527 17:03:00.657791 3096 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-efe79b1159" May 27 17:03:00.743823 kubelet[3096]: E0527 17:03:00.743775 3096 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.0.0-a-efe79b1159\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.0.0-a-efe79b1159" May 27 17:03:00.744049 kubelet[3096]: I0527 17:03:00.743989 3096 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-efe79b1159" May 27 17:03:00.745927 kubelet[3096]: E0527 17:03:00.745893 3096 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.0.0-a-efe79b1159\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.0.0-a-efe79b1159" May 27 17:03:00.745927 kubelet[3096]: I0527 17:03:00.745924 3096 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-efe79b1159" May 27 17:03:00.747731 kubelet[3096]: E0527 17:03:00.747703 3096 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.0.0-a-efe79b1159\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-efe79b1159" May 27 17:03:00.975448 kubelet[3096]: I0527 17:03:00.975222 3096 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-efe79b1159" May 27 17:03:00.977432 kubelet[3096]: E0527 17:03:00.977364 3096 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.0.0-a-efe79b1159\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.0.0-a-efe79b1159" May 27 17:03:03.061741 systemd[1]: Reload requested from client PID 3377 ('systemctl') (unit session-9.scope)... May 27 17:03:03.061761 systemd[1]: Reloading... May 27 17:03:03.172393 zram_generator::config[3432]: No configuration found. May 27 17:03:03.238249 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:03:03.334360 systemd[1]: Reloading finished in 272 ms. May 27 17:03:03.365423 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:03:03.376567 systemd[1]: kubelet.service: Deactivated successfully. May 27 17:03:03.376831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:03:03.376904 systemd[1]: kubelet.service: Consumed 811ms CPU time, 124.4M memory peak. May 27 17:03:03.380541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:03:03.491183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:03:03.498847 (kubelet)[3487]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:03:03.595848 kubelet[3487]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:03:03.595848 kubelet[3487]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 17:03:03.595848 kubelet[3487]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 17:03:03.595848 kubelet[3487]: I0527 17:03:03.595496 3487 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:03:03.601551 kubelet[3487]: I0527 17:03:03.601356 3487 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 17:03:03.601551 kubelet[3487]: I0527 17:03:03.601388 3487 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:03:03.601808 kubelet[3487]: I0527 17:03:03.601791 3487 server.go:956] "Client rotation is on, will bootstrap in background" May 27 17:03:03.604369 kubelet[3487]: I0527 17:03:03.603635 3487 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 27 17:03:03.607432 kubelet[3487]: I0527 17:03:03.607405 3487 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:03:03.613581 kubelet[3487]: I0527 17:03:03.613538 3487 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:03:03.617375 kubelet[3487]: I0527 17:03:03.616694 3487 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 17:03:03.617375 kubelet[3487]: I0527 17:03:03.616849 3487 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:03:03.617375 kubelet[3487]: I0527 17:03:03.616869 3487 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.0.0-a-efe79b1159","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:03:03.617375 kubelet[3487]: I0527 17:03:03.617074 3487 topology_manager.go:138] "Creating topology manager with none policy" May 27 17:03:03.617592 kubelet[3487]: I0527 17:03:03.617099 3487 container_manager_linux.go:303] "Creating device plugin manager" May 27 17:03:03.617592 kubelet[3487]: I0527 17:03:03.617142 3487 state_mem.go:36] "Initialized new in-memory state store" May 27 17:03:03.617592 kubelet[3487]: 
I0527 17:03:03.617272 3487 kubelet.go:480] "Attempting to sync node with API server" May 27 17:03:03.617592 kubelet[3487]: I0527 17:03:03.617280 3487 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:03:03.617592 kubelet[3487]: I0527 17:03:03.617308 3487 kubelet.go:386] "Adding apiserver pod source" May 27 17:03:03.617592 kubelet[3487]: I0527 17:03:03.617320 3487 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:03:03.625602 kubelet[3487]: I0527 17:03:03.625571 3487 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:03:03.627540 kubelet[3487]: I0527 17:03:03.627504 3487 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 17:03:03.632469 kubelet[3487]: I0527 17:03:03.632448 3487 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:03:03.632686 kubelet[3487]: I0527 17:03:03.632677 3487 server.go:1289] "Started kubelet" May 27 17:03:03.634846 kubelet[3487]: I0527 17:03:03.634806 3487 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:03:03.636328 kubelet[3487]: I0527 17:03:03.635486 3487 server.go:317] "Adding debug handlers to kubelet server" May 27 17:03:03.636328 kubelet[3487]: I0527 17:03:03.635874 3487 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:03:03.638793 kubelet[3487]: I0527 17:03:03.638730 3487 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:03:03.638957 kubelet[3487]: I0527 17:03:03.638940 3487 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:03:03.640599 kubelet[3487]: I0527 17:03:03.640575 3487 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:03:03.644193 kubelet[3487]: I0527 17:03:03.643724 3487 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:03:03.644193 kubelet[3487]: I0527 17:03:03.643821 3487 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:03:03.645300 kubelet[3487]: I0527 17:03:03.645284 3487 reconciler.go:26] "Reconciler: start to sync state" May 27 17:03:03.646021 kubelet[3487]: E0527 17:03:03.646000 3487 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:03:03.646991 kubelet[3487]: I0527 17:03:03.646968 3487 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 27 17:03:03.647986 kubelet[3487]: I0527 17:03:03.647498 3487 factory.go:223] Registration of the systemd container factory successfully May 27 17:03:03.648205 kubelet[3487]: I0527 17:03:03.648183 3487 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:03:03.651593 kubelet[3487]: I0527 17:03:03.649823 3487 factory.go:223] Registration of the containerd container factory successfully May 27 17:03:03.658064 kubelet[3487]: I0527 17:03:03.658036 3487 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" May 27 17:03:03.658204 kubelet[3487]: I0527 17:03:03.658195 3487 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 17:03:03.658263 kubelet[3487]: I0527 17:03:03.658255 3487 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 17:03:03.658314 kubelet[3487]: I0527 17:03:03.658305 3487 kubelet.go:2436] "Starting kubelet main sync loop" May 27 17:03:03.658428 kubelet[3487]: E0527 17:03:03.658393 3487 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:03:03.693540 kubelet[3487]: I0527 17:03:03.693448 3487 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:03:03.693716 kubelet[3487]: I0527 17:03:03.693704 3487 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:03:03.693774 kubelet[3487]: I0527 17:03:03.693761 3487 state_mem.go:36] "Initialized new in-memory state store" May 27 17:03:03.693968 kubelet[3487]: I0527 17:03:03.693958 3487 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 17:03:03.694036 kubelet[3487]: I0527 17:03:03.694020 3487 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 17:03:03.694080 kubelet[3487]: I0527 17:03:03.694075 3487 policy_none.go:49] "None policy: Start" May 27 17:03:03.694131 kubelet[3487]: I0527 17:03:03.694125 3487 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:03:03.694174 kubelet[3487]: I0527 17:03:03.694169 3487 state_mem.go:35] "Initializing new in-memory state store" May 27 17:03:03.694312 kubelet[3487]: I0527 17:03:03.694305 3487 state_mem.go:75] "Updated machine memory state" May 27 17:03:03.698052 kubelet[3487]: E0527 17:03:03.698031 3487 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 17:03:03.698516 kubelet[3487]: I0527 17:03:03.698442 3487 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:03:03.698668 kubelet[3487]: I0527 17:03:03.698638 3487 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:03:03.699271 kubelet[3487]: I0527 17:03:03.699245 3487 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:03:03.700908 kubelet[3487]: E0527 17:03:03.700276 3487 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 17:03:03.759184 kubelet[3487]: I0527 17:03:03.759149 3487 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-efe79b1159" May 27 17:03:03.759439 kubelet[3487]: I0527 17:03:03.759198 3487 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-efe79b1159" May 27 17:03:03.759564 kubelet[3487]: I0527 17:03:03.759243 3487 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-efe79b1159" May 27 17:03:03.771436 kubelet[3487]: I0527 17:03:03.771405 3487 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 27 17:03:03.790727 kubelet[3487]: I0527 17:03:03.790553 3487 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 27 17:03:03.790727 kubelet[3487]: I0527 17:03:03.790589 3487 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 27 17:03:03.808733 kubelet[3487]: I0527 17:03:03.808692 3487 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-efe79b1159" May 27 17:03:03.828533 kubelet[3487]: I0527 17:03:03.828459 3487 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.0.0-a-efe79b1159" May 27 17:03:03.828762 kubelet[3487]: I0527 17:03:03.828715 3487 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.0.0-a-efe79b1159" May 27 17:03:03.845616 kubelet[3487]: I0527 17:03:03.845575 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c09dfb2c3dcf8e9b59790b18757a8a84-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.0.0-a-efe79b1159\" (UID: \"c09dfb2c3dcf8e9b59790b18757a8a84\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-efe79b1159" May 27 17:03:03.846389 kubelet[3487]: I0527 17:03:03.845799 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8eee989c9e517f1f29a082cead3c3edf-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.0.0-a-efe79b1159\" (UID: \"8eee989c9e517f1f29a082cead3c3edf\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-efe79b1159" May 27 17:03:03.846389 kubelet[3487]: I0527 17:03:03.845821 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8eee989c9e517f1f29a082cead3c3edf-k8s-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-efe79b1159\" (UID: \"8eee989c9e517f1f29a082cead3c3edf\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-efe79b1159" May 27 17:03:03.846389 kubelet[3487]: I0527 17:03:03.845836 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8eee989c9e517f1f29a082cead3c3edf-kubeconfig\") pod \"kube-controller-manager-ci-4344.0.0-a-efe79b1159\" (UID: \"8eee989c9e517f1f29a082cead3c3edf\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-efe79b1159" May 27 17:03:03.846389 kubelet[3487]: 
I0527 17:03:03.845847 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8eee989c9e517f1f29a082cead3c3edf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.0.0-a-efe79b1159\" (UID: \"8eee989c9e517f1f29a082cead3c3edf\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-efe79b1159" May 27 17:03:03.846389 kubelet[3487]: I0527 17:03:03.845859 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6dea6871a880e77ff4a908a4e552067f-kubeconfig\") pod \"kube-scheduler-ci-4344.0.0-a-efe79b1159\" (UID: \"6dea6871a880e77ff4a908a4e552067f\") " pod="kube-system/kube-scheduler-ci-4344.0.0-a-efe79b1159" May 27 17:03:03.846532 kubelet[3487]: I0527 17:03:03.845869 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c09dfb2c3dcf8e9b59790b18757a8a84-k8s-certs\") pod \"kube-apiserver-ci-4344.0.0-a-efe79b1159\" (UID: \"c09dfb2c3dcf8e9b59790b18757a8a84\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-efe79b1159" May 27 17:03:03.846532 kubelet[3487]: I0527 17:03:03.845879 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8eee989c9e517f1f29a082cead3c3edf-ca-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-efe79b1159\" (UID: \"8eee989c9e517f1f29a082cead3c3edf\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-efe79b1159" May 27 17:03:03.846532 kubelet[3487]: I0527 17:03:03.845891 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c09dfb2c3dcf8e9b59790b18757a8a84-ca-certs\") pod \"kube-apiserver-ci-4344.0.0-a-efe79b1159\" (UID: \"c09dfb2c3dcf8e9b59790b18757a8a84\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-efe79b1159" May 27 17:03:08.167272 kubelet[3487]: I0527 17:03:04.618477 3487 apiserver.go:52] "Watching apiserver" May 27 17:03:08.167272 kubelet[3487]: I0527 17:03:04.644101 3487 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:03:08.167272 kubelet[3487]: I0527 17:03:04.704595 3487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.0.0-a-efe79b1159" podStartSLOduration=1.704561743 podStartE2EDuration="1.704561743s" podCreationTimestamp="2025-05-27 17:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:03:04.70440761 +0000 UTC m=+1.199431462" watchObservedRunningTime="2025-05-27 17:03:04.704561743 +0000 UTC m=+1.199585587" May 27 17:03:08.167272 kubelet[3487]: I0527 17:03:04.741428 3487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.0.0-a-efe79b1159" podStartSLOduration=1.741411611 podStartE2EDuration="1.741411611s" podCreationTimestamp="2025-05-27 17:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:03:04.727226275 +0000 UTC m=+1.222250119" watchObservedRunningTime="2025-05-27 17:03:04.741411611 +0000 UTC m=+1.236435455" May 27 17:03:08.167272 kubelet[3487]: I0527 17:03:04.741851 
3487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-efe79b1159" podStartSLOduration=1.741841128 podStartE2EDuration="1.741841128s" podCreationTimestamp="2025-05-27 17:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:03:04.741569568 +0000 UTC m=+1.236593412" watchObservedRunningTime="2025-05-27 17:03:04.741841128 +0000 UTC m=+1.236864980" May 27 17:03:08.167272 kubelet[3487]: I0527 17:03:08.165161 3487 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 17:03:08.167736 containerd[1886]: time="2025-05-27T17:03:08.165616095Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 17:03:08.167879 kubelet[3487]: I0527 17:03:08.165763 3487 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 17:03:08.208641 sudo[3522]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 27 17:03:08.208861 sudo[3522]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 27 17:03:08.573699 sudo[3522]: pam_unix(sudo:session): session closed for user root May 27 17:03:09.055594 systemd[1]: Created slice kubepods-besteffort-podf3cf5972_3f03_4214_87f4_7626a36cb860.slice - libcontainer container kubepods-besteffort-podf3cf5972_3f03_4214_87f4_7626a36cb860.slice. May 27 17:03:09.078450 kubelet[3487]: I0527 17:03:09.078346 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3cf5972-3f03-4214-87f4-7626a36cb860-xtables-lock\") pod \"kube-proxy-clwvz\" (UID: \"f3cf5972-3f03-4214-87f4-7626a36cb860\") " pod="kube-system/kube-proxy-clwvz" May 27 17:03:09.078450 kubelet[3487]: I0527 17:03:09.078388 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f3cf5972-3f03-4214-87f4-7626a36cb860-kube-proxy\") pod \"kube-proxy-clwvz\" (UID: \"f3cf5972-3f03-4214-87f4-7626a36cb860\") " pod="kube-system/kube-proxy-clwvz" May 27 17:03:09.078450 kubelet[3487]: I0527 17:03:09.078399 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3cf5972-3f03-4214-87f4-7626a36cb860-lib-modules\") pod \"kube-proxy-clwvz\" (UID: \"f3cf5972-3f03-4214-87f4-7626a36cb860\") " pod="kube-system/kube-proxy-clwvz" May 27 17:03:09.078450 kubelet[3487]: I0527 17:03:09.078411 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qml5s\" (UniqueName: \"kubernetes.io/projected/f3cf5972-3f03-4214-87f4-7626a36cb860-kube-api-access-qml5s\") pod \"kube-proxy-clwvz\" (UID: \"f3cf5972-3f03-4214-87f4-7626a36cb860\") " pod="kube-system/kube-proxy-clwvz" May 27 17:03:09.364464 containerd[1886]: time="2025-05-27T17:03:09.364288848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-clwvz,Uid:f3cf5972-3f03-4214-87f4-7626a36cb860,Namespace:kube-system,Attempt:0,}" May 27 17:03:10.540836 systemd[1]: Created slice kubepods-burstable-podd9971166_7dc7_4ddb_ace6_63889aeb6c05.slice - libcontainer container kubepods-burstable-podd9971166_7dc7_4ddb_ace6_63889aeb6c05.slice. 
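The pod_startup_latency_tracker entries above record a startup SLO duration for each static pod. A small, self-contained sketch that pulls those figures out of a journal dump formatted like this one (field names match the kubelet output; the journal.txt path is only a placeholder):

```python
# Sketch: extract the "Observed pod startup duration" records shown above
# from a journal dump. Field names match the kubelet output exactly; the
# journal.txt path is a placeholder.
import re

PATTERN = re.compile(
    r'pod_startup_latency_tracker\.go:\d+\].*?'
    r'pod="(?P<pod>[^"]+)".*?'
    r'podStartSLOduration=(?P<slo>[0-9.]+)'
)

def pod_startup_durations(log_text):
    """Yield (pod, seconds) for every startup-latency record in the log."""
    for m in PATTERN.finditer(log_text):
        yield m.group("pod"), float(m.group("slo"))

if __name__ == "__main__":
    with open("journal.txt") as fh:          # placeholder input file
        for pod, seconds in pod_startup_durations(fh.read()):
            print(f"{pod}: {seconds:.3f}s")
```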
May 27 17:03:10.588924 kubelet[3487]: I0527 17:03:10.588844 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-lib-modules\") pod \"cilium-f7dt8\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " pod="kube-system/cilium-f7dt8" May 27 17:03:10.588924 kubelet[3487]: I0527 17:03:10.588888 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9971166-7dc7-4ddb-ace6-63889aeb6c05-clustermesh-secrets\") pod \"cilium-f7dt8\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " pod="kube-system/cilium-f7dt8" May 27 17:03:10.588924 kubelet[3487]: I0527 17:03:10.588917 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-cilium-run\") pod \"cilium-f7dt8\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " pod="kube-system/cilium-f7dt8" May 27 17:03:10.589401 kubelet[3487]: I0527 17:03:10.588968 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-cni-path\") pod \"cilium-f7dt8\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " pod="kube-system/cilium-f7dt8" May 27 17:03:10.589401 kubelet[3487]: I0527 17:03:10.588986 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-host-proc-sys-net\") pod \"cilium-f7dt8\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " pod="kube-system/cilium-f7dt8" May 27 17:03:10.589401 kubelet[3487]: I0527 17:03:10.588997 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9971166-7dc7-4ddb-ace6-63889aeb6c05-hubble-tls\") pod \"cilium-f7dt8\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " pod="kube-system/cilium-f7dt8" May 27 17:03:10.589401 kubelet[3487]: I0527 17:03:10.589011 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jnn8\" (UniqueName: \"kubernetes.io/projected/d9971166-7dc7-4ddb-ace6-63889aeb6c05-kube-api-access-4jnn8\") pod \"cilium-f7dt8\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " pod="kube-system/cilium-f7dt8" May 27 17:03:10.589401 kubelet[3487]: I0527 17:03:10.589036 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-bpf-maps\") pod \"cilium-f7dt8\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " pod="kube-system/cilium-f7dt8" May 27 17:03:10.589401 kubelet[3487]: I0527 17:03:10.589046 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-xtables-lock\") pod \"cilium-f7dt8\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " pod="kube-system/cilium-f7dt8" May 27 17:03:10.589495 kubelet[3487]: I0527 17:03:10.589055 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/d9971166-7dc7-4ddb-ace6-63889aeb6c05-cilium-config-path\") pod \"cilium-f7dt8\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " pod="kube-system/cilium-f7dt8" May 27 17:03:10.589495 kubelet[3487]: I0527 17:03:10.589080 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-hostproc\") pod \"cilium-f7dt8\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " pod="kube-system/cilium-f7dt8" May 27 17:03:10.589495 kubelet[3487]: I0527 17:03:10.589122 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-cilium-cgroup\") pod \"cilium-f7dt8\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " pod="kube-system/cilium-f7dt8" May 27 17:03:10.589495 kubelet[3487]: I0527 17:03:10.589134 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-host-proc-sys-kernel\") pod \"cilium-f7dt8\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " pod="kube-system/cilium-f7dt8" May 27 17:03:10.589495 kubelet[3487]: I0527 17:03:10.589158 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-etc-cni-netd\") pod \"cilium-f7dt8\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " pod="kube-system/cilium-f7dt8" May 27 17:03:10.629101 systemd[1]: Created slice kubepods-besteffort-pod12031f1b_69ed_43c7_a969_0b0a5630d402.slice - libcontainer container kubepods-besteffort-pod12031f1b_69ed_43c7_a969_0b0a5630d402.slice. 
May 27 17:03:10.689511 kubelet[3487]: I0527 17:03:10.689457 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12031f1b-69ed-43c7-a969-0b0a5630d402-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-b78ck\" (UID: \"12031f1b-69ed-43c7-a969-0b0a5630d402\") " pod="kube-system/cilium-operator-6c4d7847fc-b78ck" May 27 17:03:10.689511 kubelet[3487]: I0527 17:03:10.689489 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7gf5\" (UniqueName: \"kubernetes.io/projected/12031f1b-69ed-43c7-a969-0b0a5630d402-kube-api-access-b7gf5\") pod \"cilium-operator-6c4d7847fc-b78ck\" (UID: \"12031f1b-69ed-43c7-a969-0b0a5630d402\") " pod="kube-system/cilium-operator-6c4d7847fc-b78ck" May 27 17:03:10.846626 containerd[1886]: time="2025-05-27T17:03:10.846221408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f7dt8,Uid:d9971166-7dc7-4ddb-ace6-63889aeb6c05,Namespace:kube-system,Attempt:0,}" May 27 17:03:10.932022 containerd[1886]: time="2025-05-27T17:03:10.931973180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-b78ck,Uid:12031f1b-69ed-43c7-a969-0b0a5630d402,Namespace:kube-system,Attempt:0,}" May 27 17:03:10.985163 containerd[1886]: time="2025-05-27T17:03:10.985114769Z" level=info msg="connecting to shim 14999915dc5497b9a4364ac514793c0cfa8b8f0e28b031eb2b57bde9ae5ec685" address="unix:///run/containerd/s/9418e48507d318b12f77c313743d6b4412fda14c8237d15e59183fb589d72fc9" namespace=k8s.io protocol=ttrpc version=3 May 27 17:03:11.005536 systemd[1]: Started cri-containerd-14999915dc5497b9a4364ac514793c0cfa8b8f0e28b031eb2b57bde9ae5ec685.scope - libcontainer container 14999915dc5497b9a4364ac514793c0cfa8b8f0e28b031eb2b57bde9ae5ec685. May 27 17:03:11.139236 containerd[1886]: time="2025-05-27T17:03:11.139041435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-clwvz,Uid:f3cf5972-3f03-4214-87f4-7626a36cb860,Namespace:kube-system,Attempt:0,} returns sandbox id \"14999915dc5497b9a4364ac514793c0cfa8b8f0e28b031eb2b57bde9ae5ec685\"" May 27 17:03:11.186857 containerd[1886]: time="2025-05-27T17:03:11.186115065Z" level=info msg="CreateContainer within sandbox \"14999915dc5497b9a4364ac514793c0cfa8b8f0e28b031eb2b57bde9ae5ec685\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 17:03:11.623887 containerd[1886]: time="2025-05-27T17:03:11.623831642Z" level=info msg="Container 2477982069f4edebcd1c4b96e4dd6161029975a7447ce318f108add019325941: CDI devices from CRI Config.CDIDevices: []" May 27 17:03:11.794631 containerd[1886]: time="2025-05-27T17:03:11.794488305Z" level=info msg="connecting to shim fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138" address="unix:///run/containerd/s/7ed6a73a5e01a4cf1c42346bd074d2b56a2e022dc63c425ce241b2e09d7b1780" namespace=k8s.io protocol=ttrpc version=3 May 27 17:03:11.814516 systemd[1]: Started cri-containerd-fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138.scope - libcontainer container fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138. 
May 27 17:03:11.976627 containerd[1886]: time="2025-05-27T17:03:11.976500636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f7dt8,Uid:d9971166-7dc7-4ddb-ace6-63889aeb6c05,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\"" May 27 17:03:11.978494 containerd[1886]: time="2025-05-27T17:03:11.978392880Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 27 17:03:12.082164 containerd[1886]: time="2025-05-27T17:03:12.082075408Z" level=info msg="connecting to shim bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f" address="unix:///run/containerd/s/34e8c80e6bd25941aa15b663010db291a340433cbb37f43d3c7720b0e9e43ddf" namespace=k8s.io protocol=ttrpc version=3 May 27 17:03:12.104516 systemd[1]: Started cri-containerd-bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f.scope - libcontainer container bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f. May 27 17:03:12.178327 containerd[1886]: time="2025-05-27T17:03:12.178257724Z" level=info msg="CreateContainer within sandbox \"14999915dc5497b9a4364ac514793c0cfa8b8f0e28b031eb2b57bde9ae5ec685\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2477982069f4edebcd1c4b96e4dd6161029975a7447ce318f108add019325941\"" May 27 17:03:12.179602 containerd[1886]: time="2025-05-27T17:03:12.179545924Z" level=info msg="StartContainer for \"2477982069f4edebcd1c4b96e4dd6161029975a7447ce318f108add019325941\"" May 27 17:03:12.181168 containerd[1886]: time="2025-05-27T17:03:12.181126390Z" level=info msg="connecting to shim 2477982069f4edebcd1c4b96e4dd6161029975a7447ce318f108add019325941" address="unix:///run/containerd/s/9418e48507d318b12f77c313743d6b4412fda14c8237d15e59183fb589d72fc9" protocol=ttrpc version=3 May 27 17:03:12.196535 systemd[1]: Started cri-containerd-2477982069f4edebcd1c4b96e4dd6161029975a7447ce318f108add019325941.scope - libcontainer container 2477982069f4edebcd1c4b96e4dd6161029975a7447ce318f108add019325941. May 27 17:03:12.229224 containerd[1886]: time="2025-05-27T17:03:12.228993053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-b78ck,Uid:12031f1b-69ed-43c7-a969-0b0a5630d402,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f\"" May 27 17:03:12.232994 containerd[1886]: time="2025-05-27T17:03:12.232954849Z" level=info msg="StartContainer for \"2477982069f4edebcd1c4b96e4dd6161029975a7447ce318f108add019325941\" returns successfully" May 27 17:03:12.843007 kubelet[3487]: I0527 17:03:12.842949 3487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-clwvz" podStartSLOduration=4.842930036 podStartE2EDuration="4.842930036s" podCreationTimestamp="2025-05-27 17:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:03:12.724560879 +0000 UTC m=+9.219584723" watchObservedRunningTime="2025-05-27 17:03:12.842930036 +0000 UTC m=+9.337953888" May 27 17:03:21.989189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3888401573.mount: Deactivated successfully. 
May 27 17:03:24.857232 containerd[1886]: time="2025-05-27T17:03:24.857144446Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:03:24.861821 containerd[1886]: time="2025-05-27T17:03:24.861774369Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 27 17:03:24.865949 containerd[1886]: time="2025-05-27T17:03:24.865895227Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:03:24.867029 containerd[1886]: time="2025-05-27T17:03:24.866893203Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.888456562s" May 27 17:03:24.867029 containerd[1886]: time="2025-05-27T17:03:24.866931844Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 27 17:03:24.868300 containerd[1886]: time="2025-05-27T17:03:24.868044424Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 27 17:03:24.874491 containerd[1886]: time="2025-05-27T17:03:24.874452299Z" level=info msg="CreateContainer within sandbox \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 17:03:24.929039 containerd[1886]: time="2025-05-27T17:03:24.928993303Z" level=info msg="Container 7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0: CDI devices from CRI Config.CDIDevices: []" May 27 17:03:24.930088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount406061557.mount: Deactivated successfully. May 27 17:03:24.974971 containerd[1886]: time="2025-05-27T17:03:24.974926642Z" level=info msg="CreateContainer within sandbox \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0\"" May 27 17:03:24.975879 containerd[1886]: time="2025-05-27T17:03:24.975811910Z" level=info msg="StartContainer for \"7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0\"" May 27 17:03:24.976812 containerd[1886]: time="2025-05-27T17:03:24.976760500Z" level=info msg="connecting to shim 7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0" address="unix:///run/containerd/s/7ed6a73a5e01a4cf1c42346bd074d2b56a2e022dc63c425ce241b2e09d7b1780" protocol=ttrpc version=3 May 27 17:03:24.999024 systemd[1]: Started cri-containerd-7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0.scope - libcontainer container 7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0. 
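The "Pulled image" message above reports the cilium image arriving with size "157636062" in 12.888456562s. Treating the reported size as bytes, a quick back-of-the-envelope check of the implied transfer rate (numbers copied verbatim from that message; the calculation itself is not part of the log):

```python
# Numbers taken verbatim from the containerd "Pulled image" message above;
# treating the reported size as bytes, this is just the implied transfer rate.
size_bytes = 157_636_062
duration_s = 12.888456562
print(f"effective pull rate ~ {size_bytes / duration_s / 2**20:.1f} MiB/s")  # ~11.7
```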
May 27 17:03:25.037051 containerd[1886]: time="2025-05-27T17:03:25.030899851Z" level=info msg="StartContainer for \"7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0\" returns successfully" May 27 17:03:25.043626 systemd[1]: cri-containerd-7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0.scope: Deactivated successfully. May 27 17:03:25.045540 containerd[1886]: time="2025-05-27T17:03:25.045439617Z" level=info msg="received exit event container_id:\"7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0\" id:\"7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0\" pid:3880 exited_at:{seconds:1748365405 nanos:44933913}" May 27 17:03:25.046224 containerd[1886]: time="2025-05-27T17:03:25.046184536Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0\" id:\"7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0\" pid:3880 exited_at:{seconds:1748365405 nanos:44933913}" May 27 17:03:25.926198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0-rootfs.mount: Deactivated successfully. May 27 17:03:32.777847 containerd[1886]: time="2025-05-27T17:03:32.777470818Z" level=info msg="CreateContainer within sandbox \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 17:03:32.988933 containerd[1886]: time="2025-05-27T17:03:32.988459788Z" level=info msg="Container 15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a: CDI devices from CRI Config.CDIDevices: []" May 27 17:03:33.129149 containerd[1886]: time="2025-05-27T17:03:33.129015857Z" level=info msg="CreateContainer within sandbox \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a\"" May 27 17:03:33.129819 containerd[1886]: time="2025-05-27T17:03:33.129785985Z" level=info msg="StartContainer for \"15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a\"" May 27 17:03:33.130929 containerd[1886]: time="2025-05-27T17:03:33.130854443Z" level=info msg="connecting to shim 15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a" address="unix:///run/containerd/s/7ed6a73a5e01a4cf1c42346bd074d2b56a2e022dc63c425ce241b2e09d7b1780" protocol=ttrpc version=3 May 27 17:03:33.153534 systemd[1]: Started cri-containerd-15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a.scope - libcontainer container 15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a. May 27 17:03:33.223088 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 17:03:33.223533 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 17:03:33.224135 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 27 17:03:33.226944 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 17:03:33.228199 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 17:03:33.229732 systemd[1]: cri-containerd-15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a.scope: Deactivated successfully. 
May 27 17:03:33.232772 containerd[1886]: time="2025-05-27T17:03:33.232612206Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a\" id:\"15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a\" pid:3924 exited_at:{seconds:1748365413 nanos:231569118}" May 27 17:03:33.245993 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 17:03:33.281055 containerd[1886]: time="2025-05-27T17:03:33.280914976Z" level=info msg="received exit event container_id:\"15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a\" id:\"15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a\" pid:3924 exited_at:{seconds:1748365413 nanos:231569118}" May 27 17:03:33.281927 containerd[1886]: time="2025-05-27T17:03:33.281860774Z" level=info msg="StartContainer for \"15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a\" returns successfully" May 27 17:03:33.987665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a-rootfs.mount: Deactivated successfully. May 27 17:03:34.770006 containerd[1886]: time="2025-05-27T17:03:34.769542300Z" level=info msg="CreateContainer within sandbox \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 17:03:35.083422 containerd[1886]: time="2025-05-27T17:03:35.083221574Z" level=info msg="Container 4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff: CDI devices from CRI Config.CDIDevices: []" May 27 17:03:35.087330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1878363656.mount: Deactivated successfully. May 27 17:03:35.328726 containerd[1886]: time="2025-05-27T17:03:35.328604535Z" level=info msg="CreateContainer within sandbox \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff\"" May 27 17:03:35.329360 containerd[1886]: time="2025-05-27T17:03:35.329313133Z" level=info msg="StartContainer for \"4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff\"" May 27 17:03:35.332553 containerd[1886]: time="2025-05-27T17:03:35.332515937Z" level=info msg="connecting to shim 4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff" address="unix:///run/containerd/s/7ed6a73a5e01a4cf1c42346bd074d2b56a2e022dc63c425ce241b2e09d7b1780" protocol=ttrpc version=3 May 27 17:03:35.354525 systemd[1]: Started cri-containerd-4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff.scope - libcontainer container 4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff. May 27 17:03:35.384547 systemd[1]: cri-containerd-4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff.scope: Deactivated successfully. 
May 27 17:03:35.428376 containerd[1886]: time="2025-05-27T17:03:35.386993819Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff\" id:\"4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff\" pid:3973 exited_at:{seconds:1748365415 nanos:385931506}" May 27 17:03:35.429874 containerd[1886]: time="2025-05-27T17:03:35.429730655Z" level=info msg="received exit event container_id:\"4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff\" id:\"4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff\" pid:3973 exited_at:{seconds:1748365415 nanos:385931506}" May 27 17:03:35.436287 containerd[1886]: time="2025-05-27T17:03:35.436252131Z" level=info msg="StartContainer for \"4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff\" returns successfully" May 27 17:03:35.448211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff-rootfs.mount: Deactivated successfully. May 27 17:03:36.823654 containerd[1886]: time="2025-05-27T17:03:36.823618634Z" level=info msg="CreateContainer within sandbox \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 17:03:36.832822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2000974886.mount: Deactivated successfully. May 27 17:03:38.333153 containerd[1886]: time="2025-05-27T17:03:38.333109351Z" level=info msg="Container cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8: CDI devices from CRI Config.CDIDevices: []" May 27 17:03:38.532628 containerd[1886]: time="2025-05-27T17:03:38.532577609Z" level=info msg="CreateContainer within sandbox \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8\"" May 27 17:03:38.533433 containerd[1886]: time="2025-05-27T17:03:38.533241062Z" level=info msg="StartContainer for \"cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8\"" May 27 17:03:38.534441 containerd[1886]: time="2025-05-27T17:03:38.534400530Z" level=info msg="connecting to shim cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8" address="unix:///run/containerd/s/7ed6a73a5e01a4cf1c42346bd074d2b56a2e022dc63c425ce241b2e09d7b1780" protocol=ttrpc version=3 May 27 17:03:38.554544 systemd[1]: Started cri-containerd-cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8.scope - libcontainer container cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8. May 27 17:03:38.584837 systemd[1]: cri-containerd-cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8.scope: Deactivated successfully. 
May 27 17:03:38.586541 containerd[1886]: time="2025-05-27T17:03:38.586495778Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8\" id:\"cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8\" pid:4023 exited_at:{seconds:1748365418 nanos:585983906}" May 27 17:03:38.627571 containerd[1886]: time="2025-05-27T17:03:38.627517384Z" level=info msg="received exit event container_id:\"cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8\" id:\"cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8\" pid:4023 exited_at:{seconds:1748365418 nanos:585983906}" May 27 17:03:38.628766 containerd[1886]: time="2025-05-27T17:03:38.628710182Z" level=info msg="StartContainer for \"cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8\" returns successfully" May 27 17:03:38.646112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8-rootfs.mount: Deactivated successfully. May 27 17:03:39.879409 containerd[1886]: time="2025-05-27T17:03:39.879329971Z" level=info msg="CreateContainer within sandbox \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 17:03:40.140977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1274738710.mount: Deactivated successfully. May 27 17:03:40.142715 containerd[1886]: time="2025-05-27T17:03:40.142581323Z" level=info msg="Container c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f: CDI devices from CRI Config.CDIDevices: []" May 27 17:03:40.275760 containerd[1886]: time="2025-05-27T17:03:40.275638409Z" level=info msg="CreateContainer within sandbox \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\"" May 27 17:03:40.276520 containerd[1886]: time="2025-05-27T17:03:40.276457123Z" level=info msg="StartContainer for \"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\"" May 27 17:03:40.277653 containerd[1886]: time="2025-05-27T17:03:40.277628880Z" level=info msg="connecting to shim c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f" address="unix:///run/containerd/s/7ed6a73a5e01a4cf1c42346bd074d2b56a2e022dc63c425ce241b2e09d7b1780" protocol=ttrpc version=3 May 27 17:03:40.300539 systemd[1]: Started cri-containerd-c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f.scope - libcontainer container c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f. 
May 27 17:03:40.375855 containerd[1886]: time="2025-05-27T17:03:40.375812217Z" level=info msg="StartContainer for \"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" returns successfully" May 27 17:03:40.432278 containerd[1886]: time="2025-05-27T17:03:40.432151402Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" id:\"c30477ffd4080afb42a60bf0d6cd828542327fa62e759711e62c0fe9db73616a\" pid:4101 exited_at:{seconds:1748365420 nanos:431833728}" May 27 17:03:40.525365 kubelet[3487]: I0527 17:03:40.525104 3487 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 17:03:40.632501 systemd[1]: Created slice kubepods-burstable-podba3d1167_3d58_451b_94ee_1a92ec00b821.slice - libcontainer container kubepods-burstable-podba3d1167_3d58_451b_94ee_1a92ec00b821.slice. May 27 17:03:40.717771 kubelet[3487]: I0527 17:03:40.692037 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c586m\" (UniqueName: \"kubernetes.io/projected/ba3d1167-3d58-451b-94ee-1a92ec00b821-kube-api-access-c586m\") pod \"coredns-674b8bbfcf-s9knj\" (UID: \"ba3d1167-3d58-451b-94ee-1a92ec00b821\") " pod="kube-system/coredns-674b8bbfcf-s9knj" May 27 17:03:40.717771 kubelet[3487]: I0527 17:03:40.692645 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba3d1167-3d58-451b-94ee-1a92ec00b821-config-volume\") pod \"coredns-674b8bbfcf-s9knj\" (UID: \"ba3d1167-3d58-451b-94ee-1a92ec00b821\") " pod="kube-system/coredns-674b8bbfcf-s9knj" May 27 17:03:40.789495 systemd[1]: Created slice kubepods-burstable-podf98254a0_31ab_4e1e_a054_b9e078fb699e.slice - libcontainer container kubepods-burstable-podf98254a0_31ab_4e1e_a054_b9e078fb699e.slice. 
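With the cilium agent started, the kubelet above reports "Fast updating node status as it just became ready" and the coredns pods begin to be admitted. A minimal read-back sketch, assuming the official Python client and a kubeconfig for this cluster (assumptions, not shown in the log), of the node's Ready condition and the pod CIDR pushed to the runtime earlier (192.168.0.0/24):

```python
# Illustrative read-back of what the log reports: the node Ready condition and
# the pod CIDR (192.168.0.0/24). Assumes the official Python client and a
# kubeconfig for this cluster.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

node = v1.read_node("ci-4344.0.0-a-efe79b1159")
ready = next(c for c in node.status.conditions if c.type == "Ready")
print("Ready:", ready.status, ready.reason)
print("podCIDR:", node.spec.pod_cidr)
```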
May 27 17:03:40.872309 kubelet[3487]: I0527 17:03:40.871942 3487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f7dt8" podStartSLOduration=18.982010299 podStartE2EDuration="31.871923059s" podCreationTimestamp="2025-05-27 17:03:09 +0000 UTC" firstStartedPulling="2025-05-27 17:03:11.978012316 +0000 UTC m=+8.473036160" lastFinishedPulling="2025-05-27 17:03:24.867925052 +0000 UTC m=+21.362948920" observedRunningTime="2025-05-27 17:03:40.871419099 +0000 UTC m=+37.366442959" watchObservedRunningTime="2025-05-27 17:03:40.871923059 +0000 UTC m=+37.366946967" May 27 17:03:40.894036 kubelet[3487]: I0527 17:03:40.893987 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j65m9\" (UniqueName: \"kubernetes.io/projected/f98254a0-31ab-4e1e-a054-b9e078fb699e-kube-api-access-j65m9\") pod \"coredns-674b8bbfcf-gl5fl\" (UID: \"f98254a0-31ab-4e1e-a054-b9e078fb699e\") " pod="kube-system/coredns-674b8bbfcf-gl5fl" May 27 17:03:40.894036 kubelet[3487]: I0527 17:03:40.894040 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f98254a0-31ab-4e1e-a054-b9e078fb699e-config-volume\") pod \"coredns-674b8bbfcf-gl5fl\" (UID: \"f98254a0-31ab-4e1e-a054-b9e078fb699e\") " pod="kube-system/coredns-674b8bbfcf-gl5fl" May 27 17:03:40.936688 containerd[1886]: time="2025-05-27T17:03:40.936643965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s9knj,Uid:ba3d1167-3d58-451b-94ee-1a92ec00b821,Namespace:kube-system,Attempt:0,}" May 27 17:03:41.021373 containerd[1886]: time="2025-05-27T17:03:41.021317428Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:03:41.067400 containerd[1886]: time="2025-05-27T17:03:41.067333336Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 27 17:03:41.099373 containerd[1886]: time="2025-05-27T17:03:41.099259319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gl5fl,Uid:f98254a0-31ab-4e1e-a054-b9e078fb699e,Namespace:kube-system,Attempt:0,}" May 27 17:03:41.134165 containerd[1886]: time="2025-05-27T17:03:41.134067833Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:03:41.179111 containerd[1886]: time="2025-05-27T17:03:41.179058180Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 16.310985172s" May 27 17:03:41.179111 containerd[1886]: time="2025-05-27T17:03:41.179102029Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 27 17:03:41.233910 containerd[1886]: 
time="2025-05-27T17:03:41.233871629Z" level=info msg="CreateContainer within sandbox \"bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 27 17:03:41.380537 containerd[1886]: time="2025-05-27T17:03:41.380399956Z" level=info msg="Container 9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d: CDI devices from CRI Config.CDIDevices: []" May 27 17:03:41.481042 containerd[1886]: time="2025-05-27T17:03:41.480937591Z" level=info msg="CreateContainer within sandbox \"bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d\"" May 27 17:03:41.481873 containerd[1886]: time="2025-05-27T17:03:41.481801355Z" level=info msg="StartContainer for \"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d\"" May 27 17:03:41.483003 containerd[1886]: time="2025-05-27T17:03:41.482976952Z" level=info msg="connecting to shim 9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d" address="unix:///run/containerd/s/34e8c80e6bd25941aa15b663010db291a340433cbb37f43d3c7720b0e9e43ddf" protocol=ttrpc version=3 May 27 17:03:41.501515 systemd[1]: Started cri-containerd-9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d.scope - libcontainer container 9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d. May 27 17:03:41.534364 containerd[1886]: time="2025-05-27T17:03:41.534245713Z" level=info msg="StartContainer for \"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d\" returns successfully" May 27 17:03:42.216338 containerd[1886]: time="2025-05-27T17:03:42.216273669Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" id:\"86229ccc0dda92fd9e25f382ba1de6eb0b5a7574b54b96a7ddc8ca5f7400903e\" pid:4244 exit_status:1 exited_at:{seconds:1748365422 nanos:215928866}" May 27 17:03:45.231932 containerd[1886]: time="2025-05-27T17:03:45.231888641Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" id:\"e1c645b38ad10f9d7e242a04b6baa83fb4be6f91d5f0cfb9d53b58a68d550c2d\" pid:4268 exit_status:1 exited_at:{seconds:1748365425 nanos:231520694}" May 27 17:03:45.395934 systemd-networkd[1668]: cilium_host: Link UP May 27 17:03:45.396058 systemd-networkd[1668]: cilium_net: Link UP May 27 17:03:45.396503 systemd-networkd[1668]: cilium_net: Gained carrier May 27 17:03:45.397384 systemd-networkd[1668]: cilium_host: Gained carrier May 27 17:03:45.568043 systemd-networkd[1668]: cilium_vxlan: Link UP May 27 17:03:45.568048 systemd-networkd[1668]: cilium_vxlan: Gained carrier May 27 17:03:45.707462 systemd-networkd[1668]: cilium_host: Gained IPv6LL May 27 17:03:45.830369 kernel: NET: Registered PF_ALG protocol family May 27 17:03:45.948474 systemd-networkd[1668]: cilium_net: Gained IPv6LL May 27 17:03:46.438043 systemd-networkd[1668]: lxc_health: Link UP May 27 17:03:46.441482 systemd-networkd[1668]: lxc_health: Gained carrier May 27 17:03:46.602879 systemd-networkd[1668]: lxc00b54fea82bf: Link UP May 27 17:03:46.609369 kernel: eth0: renamed from tmpf2f7f May 27 17:03:46.609372 systemd-networkd[1668]: lxc00b54fea82bf: Gained carrier May 27 17:03:46.761366 systemd-networkd[1668]: lxc0f4ac6bd1d62: Link UP May 27 17:03:46.780390 kernel: eth0: renamed from tmp37496 May 27 
17:03:46.785438 systemd-networkd[1668]: lxc0f4ac6bd1d62: Gained carrier May 27 17:03:46.875591 kubelet[3487]: I0527 17:03:46.875519 3487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-b78ck" podStartSLOduration=8.925817352 podStartE2EDuration="37.875504348s" podCreationTimestamp="2025-05-27 17:03:09 +0000 UTC" firstStartedPulling="2025-05-27 17:03:12.232238387 +0000 UTC m=+8.727262367" lastFinishedPulling="2025-05-27 17:03:41.181925511 +0000 UTC m=+37.676949363" observedRunningTime="2025-05-27 17:03:41.922897966 +0000 UTC m=+38.417921810" watchObservedRunningTime="2025-05-27 17:03:46.875504348 +0000 UTC m=+43.370528192" May 27 17:03:47.035505 systemd-networkd[1668]: cilium_vxlan: Gained IPv6LL May 27 17:03:47.314191 containerd[1886]: time="2025-05-27T17:03:47.313840200Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" id:\"917ac27295ce2893f231e5b7b5271dbc0654ee7b70a7af73f2485088e3f5a563\" pid:4659 exited_at:{seconds:1748365427 nanos:313109225}" May 27 17:03:47.739616 systemd-networkd[1668]: lxc00b54fea82bf: Gained IPv6LL May 27 17:03:48.123506 systemd-networkd[1668]: lxc_health: Gained IPv6LL May 27 17:03:48.827552 systemd-networkd[1668]: lxc0f4ac6bd1d62: Gained IPv6LL May 27 17:03:49.438505 containerd[1886]: time="2025-05-27T17:03:49.438462373Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" id:\"2c7a0839dd9110f3cde87ad49b660b86deb0c1cc062ec91c20b7d4161809dc2b\" pid:4689 exited_at:{seconds:1748365429 nanos:437962997}" May 27 17:03:49.541079 containerd[1886]: time="2025-05-27T17:03:49.540447097Z" level=info msg="connecting to shim f2f7f6d3a2e0882e32a7463db52235ec5ccc10bb7ec8d6e6a8067b6f0baa57aa" address="unix:///run/containerd/s/67b915c6d648ef183951805b7b95fb60cc93e65fcb67017998a104dd98fecd62" namespace=k8s.io protocol=ttrpc version=3 May 27 17:03:49.541079 containerd[1886]: time="2025-05-27T17:03:49.540991202Z" level=info msg="connecting to shim 374964c4173b9fad9d9c62051e97bbf913122eaf334d51eed8bfee27e35b0c22" address="unix:///run/containerd/s/788ad2787b273d3b475b0f9bfa515b23589698827d76c7233bd7471bcddc102c" namespace=k8s.io protocol=ttrpc version=3 May 27 17:03:49.564638 systemd[1]: Started cri-containerd-374964c4173b9fad9d9c62051e97bbf913122eaf334d51eed8bfee27e35b0c22.scope - libcontainer container 374964c4173b9fad9d9c62051e97bbf913122eaf334d51eed8bfee27e35b0c22. May 27 17:03:49.580543 systemd[1]: Started cri-containerd-f2f7f6d3a2e0882e32a7463db52235ec5ccc10bb7ec8d6e6a8067b6f0baa57aa.scope - libcontainer container f2f7f6d3a2e0882e32a7463db52235ec5ccc10bb7ec8d6e6a8067b6f0baa57aa. 
May 27 17:03:49.612773 containerd[1886]: time="2025-05-27T17:03:49.612730552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gl5fl,Uid:f98254a0-31ab-4e1e-a054-b9e078fb699e,Namespace:kube-system,Attempt:0,} returns sandbox id \"374964c4173b9fad9d9c62051e97bbf913122eaf334d51eed8bfee27e35b0c22\"" May 27 17:03:49.624376 containerd[1886]: time="2025-05-27T17:03:49.622918328Z" level=info msg="CreateContainer within sandbox \"374964c4173b9fad9d9c62051e97bbf913122eaf334d51eed8bfee27e35b0c22\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:03:49.631999 containerd[1886]: time="2025-05-27T17:03:49.631961324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s9knj,Uid:ba3d1167-3d58-451b-94ee-1a92ec00b821,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2f7f6d3a2e0882e32a7463db52235ec5ccc10bb7ec8d6e6a8067b6f0baa57aa\"" May 27 17:03:49.641766 containerd[1886]: time="2025-05-27T17:03:49.641723463Z" level=info msg="CreateContainer within sandbox \"f2f7f6d3a2e0882e32a7463db52235ec5ccc10bb7ec8d6e6a8067b6f0baa57aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:03:49.663753 containerd[1886]: time="2025-05-27T17:03:49.663638287Z" level=info msg="Container 7aa4c0f2203b962c80aaade53c1e39ec0068ae459ce430dfe225d34b621a443e: CDI devices from CRI Config.CDIDevices: []" May 27 17:03:49.685965 containerd[1886]: time="2025-05-27T17:03:49.685913331Z" level=info msg="CreateContainer within sandbox \"374964c4173b9fad9d9c62051e97bbf913122eaf334d51eed8bfee27e35b0c22\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7aa4c0f2203b962c80aaade53c1e39ec0068ae459ce430dfe225d34b621a443e\"" May 27 17:03:49.686695 containerd[1886]: time="2025-05-27T17:03:49.686673499Z" level=info msg="StartContainer for \"7aa4c0f2203b962c80aaade53c1e39ec0068ae459ce430dfe225d34b621a443e\"" May 27 17:03:49.688217 containerd[1886]: time="2025-05-27T17:03:49.687995212Z" level=info msg="connecting to shim 7aa4c0f2203b962c80aaade53c1e39ec0068ae459ce430dfe225d34b621a443e" address="unix:///run/containerd/s/788ad2787b273d3b475b0f9bfa515b23589698827d76c7233bd7471bcddc102c" protocol=ttrpc version=3 May 27 17:03:49.691803 containerd[1886]: time="2025-05-27T17:03:49.691701248Z" level=info msg="Container 03b13cde771db73b6d9ea85363088e5f369357cbace5c6b48f21fb9f1e075340: CDI devices from CRI Config.CDIDevices: []" May 27 17:03:49.705552 systemd[1]: Started cri-containerd-7aa4c0f2203b962c80aaade53c1e39ec0068ae459ce430dfe225d34b621a443e.scope - libcontainer container 7aa4c0f2203b962c80aaade53c1e39ec0068ae459ce430dfe225d34b621a443e. 
May 27 17:03:49.712258 containerd[1886]: time="2025-05-27T17:03:49.712190556Z" level=info msg="CreateContainer within sandbox \"f2f7f6d3a2e0882e32a7463db52235ec5ccc10bb7ec8d6e6a8067b6f0baa57aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"03b13cde771db73b6d9ea85363088e5f369357cbace5c6b48f21fb9f1e075340\"" May 27 17:03:49.713401 containerd[1886]: time="2025-05-27T17:03:49.713016038Z" level=info msg="StartContainer for \"03b13cde771db73b6d9ea85363088e5f369357cbace5c6b48f21fb9f1e075340\"" May 27 17:03:49.714288 containerd[1886]: time="2025-05-27T17:03:49.714257813Z" level=info msg="connecting to shim 03b13cde771db73b6d9ea85363088e5f369357cbace5c6b48f21fb9f1e075340" address="unix:///run/containerd/s/67b915c6d648ef183951805b7b95fb60cc93e65fcb67017998a104dd98fecd62" protocol=ttrpc version=3 May 27 17:03:49.739688 systemd[1]: Started cri-containerd-03b13cde771db73b6d9ea85363088e5f369357cbace5c6b48f21fb9f1e075340.scope - libcontainer container 03b13cde771db73b6d9ea85363088e5f369357cbace5c6b48f21fb9f1e075340. May 27 17:03:49.752888 containerd[1886]: time="2025-05-27T17:03:49.752651427Z" level=info msg="StartContainer for \"7aa4c0f2203b962c80aaade53c1e39ec0068ae459ce430dfe225d34b621a443e\" returns successfully" May 27 17:03:49.780276 containerd[1886]: time="2025-05-27T17:03:49.780239134Z" level=info msg="StartContainer for \"03b13cde771db73b6d9ea85363088e5f369357cbace5c6b48f21fb9f1e075340\" returns successfully" May 27 17:03:49.868881 kubelet[3487]: I0527 17:03:49.868005 3487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gl5fl" podStartSLOduration=40.867986827 podStartE2EDuration="40.867986827s" podCreationTimestamp="2025-05-27 17:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:03:49.850061952 +0000 UTC m=+46.345085796" watchObservedRunningTime="2025-05-27 17:03:49.867986827 +0000 UTC m=+46.363010679" May 27 17:03:49.892462 kubelet[3487]: I0527 17:03:49.892396 3487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-s9knj" podStartSLOduration=40.892377961 podStartE2EDuration="40.892377961s" podCreationTimestamp="2025-05-27 17:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:03:49.892149314 +0000 UTC m=+46.387173158" watchObservedRunningTime="2025-05-27 17:03:49.892377961 +0000 UTC m=+46.387401805" May 27 17:03:50.528277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1395383238.mount: Deactivated successfully. 
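
The pod_startup_latency_tracker entries above carry their own arithmetic: podStartE2EDuration matches watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is consistent with that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+ offsets). A minimal sketch that reproduces the cilium-operator number from the logged fields, assuming exactly that relationship (a consistency check on the log, not kubelet's actual code):

    # Hypothetical check: reproduce podStartSLOduration for cilium-operator-6c4d7847fc-b78ck
    # from the fields logged above. Values are the m=+ monotonic offsets and the E2E duration.
    e2e        = 37.875504348            # podStartE2EDuration, seconds
    pull_start = 8.727262367             # firstStartedPulling (m=+8.727262367)
    pull_end   = 37.676949363            # lastFinishedPulling (m=+37.676949363)
    slo = e2e - (pull_end - pull_start)  # startup time with the image pull window excluded
    print(f"{slo:.9f}")                  # ~8.925817352, matching the logged podStartSLOduration

For the two coredns pods, firstStartedPulling and lastFinishedPulling are the zero time (no pull was needed), so the pull window is zero and podStartSLOduration equals podStartE2EDuration (40.867986827s and 40.892377961s), which is exactly what the kubelet entries above report.
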
May 27 17:03:51.512786 containerd[1886]: time="2025-05-27T17:03:51.512738619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" id:\"5fa6a942046b2495c14dfd0c0b20ca39c373db05244e859210871de565cfc530\" pid:4880 exited_at:{seconds:1748365431 nanos:512402336}" May 27 17:03:51.517110 kubelet[3487]: E0527 17:03:51.517065 3487 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:54362->127.0.0.1:39123: write tcp 127.0.0.1:54362->127.0.0.1:39123: write: connection reset by peer May 27 17:03:51.627426 containerd[1886]: time="2025-05-27T17:03:51.627364844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" id:\"eacac23824709fd05dde51988db8d7672d087c8bef8d8be334eb6171bd19768e\" pid:4910 exited_at:{seconds:1748365431 nanos:626618669}" May 27 17:03:51.864477 sudo[2364]: pam_unix(sudo:session): session closed for user root May 27 17:03:51.938215 sshd[2363]: Connection closed by 10.200.16.10 port 43374 May 27 17:03:51.937729 sshd-session[2361]: pam_unix(sshd:session): session closed for user core May 27 17:03:51.940507 systemd-logind[1856]: Session 9 logged out. Waiting for processes to exit. May 27 17:03:51.941133 systemd[1]: sshd@6-10.200.20.45:22-10.200.16.10:43374.service: Deactivated successfully. May 27 17:03:51.943938 systemd[1]: session-9.scope: Deactivated successfully. May 27 17:03:51.944158 systemd[1]: session-9.scope: Consumed 3.651s CPU time, 272.3M memory peak. May 27 17:03:51.946852 systemd-logind[1856]: Removed session 9. May 27 17:04:47.858367 update_engine[1859]: I20250527 17:04:47.857069 1859 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 27 17:04:47.858367 update_engine[1859]: I20250527 17:04:47.857120 1859 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 27 17:04:47.858367 update_engine[1859]: I20250527 17:04:47.857419 1859 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 27 17:04:47.858367 update_engine[1859]: I20250527 17:04:47.857910 1859 omaha_request_params.cc:62] Current group set to alpha May 27 17:04:47.858367 update_engine[1859]: I20250527 17:04:47.858006 1859 update_attempter.cc:499] Already updated boot flags. Skipping. May 27 17:04:47.858367 update_engine[1859]: I20250527 17:04:47.858012 1859 update_attempter.cc:643] Scheduling an action processor start. 
May 27 17:04:47.858367 update_engine[1859]: I20250527 17:04:47.858029 1859 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 27 17:04:47.858367 update_engine[1859]: I20250527 17:04:47.858057 1859 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 27 17:04:47.858367 update_engine[1859]: I20250527 17:04:47.858111 1859 omaha_request_action.cc:271] Posting an Omaha request to disabled May 27 17:04:47.858367 update_engine[1859]: I20250527 17:04:47.858117 1859 omaha_request_action.cc:272] Request: May 27 17:04:47.858367 update_engine[1859]: May 27 17:04:47.858367 update_engine[1859]: May 27 17:04:47.858367 update_engine[1859]: May 27 17:04:47.858367 update_engine[1859]: May 27 17:04:47.858367 update_engine[1859]: May 27 17:04:47.858367 update_engine[1859]: May 27 17:04:47.858367 update_engine[1859]: May 27 17:04:47.858367 update_engine[1859]: May 27 17:04:47.858367 update_engine[1859]: I20250527 17:04:47.858122 1859 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 17:04:47.859208 locksmithd[1963]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 27 17:04:47.859398 update_engine[1859]: I20250527 17:04:47.859302 1859 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 17:04:47.859777 update_engine[1859]: I20250527 17:04:47.859750 1859 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 27 17:04:47.930141 update_engine[1859]: E20250527 17:04:47.930070 1859 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 17:04:47.930286 update_engine[1859]: I20250527 17:04:47.930178 1859 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 27 17:04:57.806597 update_engine[1859]: I20250527 17:04:57.806524 1859 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 17:04:57.806941 update_engine[1859]: I20250527 17:04:57.806757 1859 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 17:04:57.807041 update_engine[1859]: I20250527 17:04:57.807015 1859 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 27 17:04:57.908606 update_engine[1859]: E20250527 17:04:57.908532 1859 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 17:04:57.908771 update_engine[1859]: I20250527 17:04:57.908628 1859 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 27 17:05:07.811001 update_engine[1859]: I20250527 17:05:07.810918 1859 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 17:05:07.811394 update_engine[1859]: I20250527 17:05:07.811167 1859 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 17:05:07.811487 update_engine[1859]: I20250527 17:05:07.811456 1859 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 27 17:05:07.911302 update_engine[1859]: E20250527 17:05:07.911221 1859 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 17:05:07.911469 update_engine[1859]: I20250527 17:05:07.911326 1859 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 27 17:05:17.809130 update_engine[1859]: I20250527 17:05:17.809046 1859 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 17:05:17.809507 update_engine[1859]: I20250527 17:05:17.809295 1859 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 17:05:17.809637 update_engine[1859]: I20250527 17:05:17.809573 1859 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 27 17:05:17.909118 update_engine[1859]: E20250527 17:05:17.909019 1859 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 17:05:17.909118 update_engine[1859]: I20250527 17:05:17.909107 1859 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 27 17:05:17.909118 update_engine[1859]: I20250527 17:05:17.909113 1859 omaha_request_action.cc:617] Omaha request response: May 27 17:05:17.909396 update_engine[1859]: E20250527 17:05:17.909216 1859 omaha_request_action.cc:636] Omaha request network transfer failed. May 27 17:05:17.909396 update_engine[1859]: I20250527 17:05:17.909232 1859 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 27 17:05:17.909396 update_engine[1859]: I20250527 17:05:17.909237 1859 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 27 17:05:17.909396 update_engine[1859]: I20250527 17:05:17.909240 1859 update_attempter.cc:306] Processing Done. May 27 17:05:17.909396 update_engine[1859]: E20250527 17:05:17.909251 1859 update_attempter.cc:619] Update failed. May 27 17:05:17.909396 update_engine[1859]: I20250527 17:05:17.909256 1859 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 27 17:05:17.909396 update_engine[1859]: I20250527 17:05:17.909260 1859 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 27 17:05:17.909396 update_engine[1859]: I20250527 17:05:17.909263 1859 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 27 17:05:17.909396 update_engine[1859]: I20250527 17:05:17.909327 1859 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 27 17:05:17.909396 update_engine[1859]: I20250527 17:05:17.909372 1859 omaha_request_action.cc:271] Posting an Omaha request to disabled May 27 17:05:17.909396 update_engine[1859]: I20250527 17:05:17.909377 1859 omaha_request_action.cc:272] Request: May 27 17:05:17.909396 update_engine[1859]: May 27 17:05:17.909396 update_engine[1859]: May 27 17:05:17.909396 update_engine[1859]: May 27 17:05:17.909396 update_engine[1859]: May 27 17:05:17.909396 update_engine[1859]: May 27 17:05:17.909396 update_engine[1859]: May 27 17:05:17.909396 update_engine[1859]: I20250527 17:05:17.909382 1859 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 17:05:17.909624 update_engine[1859]: I20250527 17:05:17.909519 1859 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 17:05:17.909934 update_engine[1859]: I20250527 17:05:17.909755 1859 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 27 17:05:17.909993 locksmithd[1963]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 27 17:05:17.919010 update_engine[1859]: E20250527 17:05:17.918934 1859 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 17:05:17.919010 update_engine[1859]: I20250527 17:05:17.919023 1859 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 27 17:05:17.919383 update_engine[1859]: I20250527 17:05:17.919030 1859 omaha_request_action.cc:617] Omaha request response: May 27 17:05:17.919383 update_engine[1859]: I20250527 17:05:17.919037 1859 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 27 17:05:17.919383 update_engine[1859]: I20250527 17:05:17.919041 1859 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 27 17:05:17.919383 update_engine[1859]: I20250527 17:05:17.919044 1859 update_attempter.cc:306] Processing Done. May 27 17:05:17.919383 update_engine[1859]: I20250527 17:05:17.919049 1859 update_attempter.cc:310] Error event sent. May 27 17:05:17.919383 update_engine[1859]: I20250527 17:05:17.919059 1859 update_check_scheduler.cc:74] Next update check in 42m7s May 27 17:05:17.919537 locksmithd[1963]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 27 17:05:58.315018 systemd[1]: Started sshd@7-10.200.20.45:22-10.200.16.10:42150.service - OpenSSH per-connection server daemon (10.200.16.10:42150). May 27 17:05:58.808077 sshd[4958]: Accepted publickey for core from 10.200.16.10 port 42150 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:05:58.809301 sshd-session[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:05:58.814523 systemd-logind[1856]: New session 10 of user core. May 27 17:05:58.823543 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 17:05:59.242715 sshd[4960]: Connection closed by 10.200.16.10 port 42150 May 27 17:05:59.243430 sshd-session[4958]: pam_unix(sshd:session): session closed for user core May 27 17:05:59.247098 systemd[1]: sshd@7-10.200.20.45:22-10.200.16.10:42150.service: Deactivated successfully. May 27 17:05:59.249606 systemd[1]: session-10.scope: Deactivated successfully. May 27 17:05:59.251059 systemd-logind[1856]: Session 10 logged out. Waiting for processes to exit. May 27 17:05:59.252696 systemd-logind[1856]: Removed session 10. May 27 17:06:04.325693 systemd[1]: Started sshd@8-10.200.20.45:22-10.200.16.10:36216.service - OpenSSH per-connection server daemon (10.200.16.10:36216). May 27 17:06:04.775477 sshd[4975]: Accepted publickey for core from 10.200.16.10 port 36216 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:04.777528 sshd-session[4975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:04.781888 systemd-logind[1856]: New session 11 of user core. May 27 17:06:04.792550 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 17:06:05.163545 sshd[4977]: Connection closed by 10.200.16.10 port 36216 May 27 17:06:05.164294 sshd-session[4975]: pam_unix(sshd:session): session closed for user core May 27 17:06:05.167950 systemd[1]: sshd@8-10.200.20.45:22-10.200.16.10:36216.service: Deactivated successfully. May 27 17:06:05.170874 systemd[1]: session-11.scope: Deactivated successfully. 
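
The update_engine block above is a periodic Omaha update check failing by design: the request is posted "to disabled", so libcurl reports "Could not resolve host: disabled", which reads as the update server URL being set to the literal string disabled (a common way of switching off automatic update checks on Flatcar). The fetcher retries three times roughly ten seconds apart, gives up with "Transfer resulted in an error (0), 0 bytes downloaded", maps error 2000 to kActionCodeOmahaErrorInHTTPResponse (37), sends an error event (which fails the same way), and schedules the next check in 42m7s. A small sketch for pulling those attempts out of a saved journal, assuming a hypothetical journal.txt containing lines in the format shown above:

    # Hypothetical helper: list update_engine fetch attempts and retries from a saved journal.
    # "journal.txt" is an assumed capture of lines like the update_engine entries above.
    import re

    attempt = re.compile(r"(\d{2}:\d{2}:\d{2}\.\d+).*libcurl_http_fetcher\.cc:47\] Starting/Resuming transfer")
    retry   = re.compile(r"No HTTP response, retry (\d+)")

    with open("journal.txt") as journal:
        for line in journal:
            if (m := attempt.search(line)):
                print("fetch attempt at", m.group(1))
            elif (m := retry.search(line)):
                print("  no HTTP response, retry", m.group(1))

Run against the entries above, this would list four check attempts about ten seconds apart (17:04:47, 17:04:57, 17:05:07, 17:05:17) with retries 1-3 in between, plus a fifth attempt for the error event at 17:05:17.909.
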
May 27 17:06:05.171668 systemd-logind[1856]: Session 11 logged out. Waiting for processes to exit. May 27 17:06:05.173713 systemd-logind[1856]: Removed session 11. May 27 17:06:10.248560 systemd[1]: Started sshd@9-10.200.20.45:22-10.200.16.10:56882.service - OpenSSH per-connection server daemon (10.200.16.10:56882). May 27 17:06:10.701449 sshd[4991]: Accepted publickey for core from 10.200.16.10 port 56882 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:10.702717 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:10.711376 systemd-logind[1856]: New session 12 of user core. May 27 17:06:10.714555 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 17:06:11.070473 sshd[4996]: Connection closed by 10.200.16.10 port 56882 May 27 17:06:11.071063 sshd-session[4991]: pam_unix(sshd:session): session closed for user core May 27 17:06:11.074824 systemd[1]: sshd@9-10.200.20.45:22-10.200.16.10:56882.service: Deactivated successfully. May 27 17:06:11.077845 systemd[1]: session-12.scope: Deactivated successfully. May 27 17:06:11.080985 systemd-logind[1856]: Session 12 logged out. Waiting for processes to exit. May 27 17:06:11.082832 systemd-logind[1856]: Removed session 12. May 27 17:06:16.159050 systemd[1]: Started sshd@10-10.200.20.45:22-10.200.16.10:56892.service - OpenSSH per-connection server daemon (10.200.16.10:56892). May 27 17:06:16.641517 sshd[5011]: Accepted publickey for core from 10.200.16.10 port 56892 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:16.642908 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:16.647123 systemd-logind[1856]: New session 13 of user core. May 27 17:06:16.656547 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 17:06:17.029387 sshd[5013]: Connection closed by 10.200.16.10 port 56892 May 27 17:06:17.030121 sshd-session[5011]: pam_unix(sshd:session): session closed for user core May 27 17:06:17.035038 systemd[1]: sshd@10-10.200.20.45:22-10.200.16.10:56892.service: Deactivated successfully. May 27 17:06:17.036743 systemd[1]: session-13.scope: Deactivated successfully. May 27 17:06:17.037441 systemd-logind[1856]: Session 13 logged out. Waiting for processes to exit. May 27 17:06:17.038963 systemd-logind[1856]: Removed session 13. May 27 17:06:17.110849 systemd[1]: Started sshd@11-10.200.20.45:22-10.200.16.10:56904.service - OpenSSH per-connection server daemon (10.200.16.10:56904). May 27 17:06:17.562915 sshd[5026]: Accepted publickey for core from 10.200.16.10 port 56904 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:17.564144 sshd-session[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:17.568199 systemd-logind[1856]: New session 14 of user core. May 27 17:06:17.574517 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 17:06:17.977568 sshd[5028]: Connection closed by 10.200.16.10 port 56904 May 27 17:06:17.977561 sshd-session[5026]: pam_unix(sshd:session): session closed for user core May 27 17:06:17.980792 systemd-logind[1856]: Session 14 logged out. Waiting for processes to exit. May 27 17:06:17.980961 systemd[1]: sshd@11-10.200.20.45:22-10.200.16.10:56904.service: Deactivated successfully. May 27 17:06:17.983280 systemd[1]: session-14.scope: Deactivated successfully. May 27 17:06:17.985446 systemd-logind[1856]: Removed session 14. 
May 27 17:06:18.064600 systemd[1]: Started sshd@12-10.200.20.45:22-10.200.16.10:56908.service - OpenSSH per-connection server daemon (10.200.16.10:56908). May 27 17:06:18.544411 sshd[5038]: Accepted publickey for core from 10.200.16.10 port 56908 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:18.545535 sshd-session[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:18.549604 systemd-logind[1856]: New session 15 of user core. May 27 17:06:18.564570 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 17:06:18.939323 sshd[5040]: Connection closed by 10.200.16.10 port 56908 May 27 17:06:18.940077 sshd-session[5038]: pam_unix(sshd:session): session closed for user core May 27 17:06:18.943052 systemd-logind[1856]: Session 15 logged out. Waiting for processes to exit. May 27 17:06:18.943203 systemd[1]: sshd@12-10.200.20.45:22-10.200.16.10:56908.service: Deactivated successfully. May 27 17:06:18.945030 systemd[1]: session-15.scope: Deactivated successfully. May 27 17:06:18.949922 systemd-logind[1856]: Removed session 15. May 27 17:06:24.026091 systemd[1]: Started sshd@13-10.200.20.45:22-10.200.16.10:51204.service - OpenSSH per-connection server daemon (10.200.16.10:51204). May 27 17:06:24.474273 sshd[5056]: Accepted publickey for core from 10.200.16.10 port 51204 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:24.475543 sshd-session[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:24.479608 systemd-logind[1856]: New session 16 of user core. May 27 17:06:24.484625 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 17:06:24.861441 sshd[5058]: Connection closed by 10.200.16.10 port 51204 May 27 17:06:24.862891 sshd-session[5056]: pam_unix(sshd:session): session closed for user core May 27 17:06:24.866255 systemd[1]: sshd@13-10.200.20.45:22-10.200.16.10:51204.service: Deactivated successfully. May 27 17:06:24.871381 systemd[1]: session-16.scope: Deactivated successfully. May 27 17:06:24.873016 systemd-logind[1856]: Session 16 logged out. Waiting for processes to exit. May 27 17:06:24.874723 systemd-logind[1856]: Removed session 16. May 27 17:06:33.035338 systemd[1]: Started sshd@14-10.200.20.45:22-10.200.16.10:33964.service - OpenSSH per-connection server daemon (10.200.16.10:33964). May 27 17:06:33.525323 sshd[5070]: Accepted publickey for core from 10.200.16.10 port 33964 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:33.526723 sshd-session[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:33.531532 systemd-logind[1856]: New session 17 of user core. May 27 17:06:33.542563 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 17:06:33.915257 sshd[5072]: Connection closed by 10.200.16.10 port 33964 May 27 17:06:33.915796 sshd-session[5070]: pam_unix(sshd:session): session closed for user core May 27 17:06:33.919556 systemd[1]: sshd@14-10.200.20.45:22-10.200.16.10:33964.service: Deactivated successfully. May 27 17:06:33.921957 systemd[1]: session-17.scope: Deactivated successfully. May 27 17:06:33.922942 systemd-logind[1856]: Session 17 logged out. Waiting for processes to exit. May 27 17:06:33.925014 systemd-logind[1856]: Removed session 17. May 27 17:06:34.003142 systemd[1]: Started sshd@15-10.200.20.45:22-10.200.16.10:33974.service - OpenSSH per-connection server daemon (10.200.16.10:33974). 
May 27 17:06:34.492080 sshd[5085]: Accepted publickey for core from 10.200.16.10 port 33974 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:34.493536 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:34.498401 systemd-logind[1856]: New session 18 of user core. May 27 17:06:34.507527 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 17:06:34.909973 sshd[5087]: Connection closed by 10.200.16.10 port 33974 May 27 17:06:34.910515 sshd-session[5085]: pam_unix(sshd:session): session closed for user core May 27 17:06:34.914033 systemd[1]: sshd@15-10.200.20.45:22-10.200.16.10:33974.service: Deactivated successfully. May 27 17:06:34.915924 systemd[1]: session-18.scope: Deactivated successfully. May 27 17:06:34.918143 systemd-logind[1856]: Session 18 logged out. Waiting for processes to exit. May 27 17:06:34.919197 systemd-logind[1856]: Removed session 18. May 27 17:06:34.995039 systemd[1]: Started sshd@16-10.200.20.45:22-10.200.16.10:33984.service - OpenSSH per-connection server daemon (10.200.16.10:33984). May 27 17:06:35.442154 sshd[5096]: Accepted publickey for core from 10.200.16.10 port 33984 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:35.443455 sshd-session[5096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:35.447688 systemd-logind[1856]: New session 19 of user core. May 27 17:06:35.457783 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 17:06:36.403330 sshd[5098]: Connection closed by 10.200.16.10 port 33984 May 27 17:06:36.404023 sshd-session[5096]: pam_unix(sshd:session): session closed for user core May 27 17:06:36.409556 systemd[1]: sshd@16-10.200.20.45:22-10.200.16.10:33984.service: Deactivated successfully. May 27 17:06:36.409583 systemd-logind[1856]: Session 19 logged out. Waiting for processes to exit. May 27 17:06:36.413300 systemd[1]: session-19.scope: Deactivated successfully. May 27 17:06:36.415205 systemd-logind[1856]: Removed session 19. May 27 17:06:36.494703 systemd[1]: Started sshd@17-10.200.20.45:22-10.200.16.10:33996.service - OpenSSH per-connection server daemon (10.200.16.10:33996). May 27 17:06:36.984263 sshd[5115]: Accepted publickey for core from 10.200.16.10 port 33996 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:36.985539 sshd-session[5115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:36.989650 systemd-logind[1856]: New session 20 of user core. May 27 17:06:36.996693 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 17:06:37.470246 sshd[5117]: Connection closed by 10.200.16.10 port 33996 May 27 17:06:37.470944 sshd-session[5115]: pam_unix(sshd:session): session closed for user core May 27 17:06:37.474956 systemd[1]: sshd@17-10.200.20.45:22-10.200.16.10:33996.service: Deactivated successfully. May 27 17:06:37.476520 systemd[1]: session-20.scope: Deactivated successfully. May 27 17:06:37.477320 systemd-logind[1856]: Session 20 logged out. Waiting for processes to exit. May 27 17:06:37.478999 systemd-logind[1856]: Removed session 20. May 27 17:06:37.553634 systemd[1]: Started sshd@18-10.200.20.45:22-10.200.16.10:34004.service - OpenSSH per-connection server daemon (10.200.16.10:34004). 
May 27 17:06:38.000176 sshd[5129]: Accepted publickey for core from 10.200.16.10 port 34004 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:38.001566 sshd-session[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:38.005585 systemd-logind[1856]: New session 21 of user core. May 27 17:06:38.012520 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 17:06:38.378190 sshd[5131]: Connection closed by 10.200.16.10 port 34004 May 27 17:06:38.378857 sshd-session[5129]: pam_unix(sshd:session): session closed for user core May 27 17:06:38.382171 systemd[1]: sshd@18-10.200.20.45:22-10.200.16.10:34004.service: Deactivated successfully. May 27 17:06:38.384251 systemd[1]: session-21.scope: Deactivated successfully. May 27 17:06:38.385002 systemd-logind[1856]: Session 21 logged out. Waiting for processes to exit. May 27 17:06:38.386441 systemd-logind[1856]: Removed session 21. May 27 17:06:43.461433 systemd[1]: Started sshd@19-10.200.20.45:22-10.200.16.10:54366.service - OpenSSH per-connection server daemon (10.200.16.10:54366). May 27 17:06:43.915763 sshd[5147]: Accepted publickey for core from 10.200.16.10 port 54366 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:43.917035 sshd-session[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:43.921435 systemd-logind[1856]: New session 22 of user core. May 27 17:06:43.928540 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 17:06:44.305417 sshd[5149]: Connection closed by 10.200.16.10 port 54366 May 27 17:06:44.306015 sshd-session[5147]: pam_unix(sshd:session): session closed for user core May 27 17:06:44.310063 systemd-logind[1856]: Session 22 logged out. Waiting for processes to exit. May 27 17:06:44.310326 systemd[1]: sshd@19-10.200.20.45:22-10.200.16.10:54366.service: Deactivated successfully. May 27 17:06:44.311955 systemd[1]: session-22.scope: Deactivated successfully. May 27 17:06:44.314075 systemd-logind[1856]: Removed session 22. May 27 17:06:49.396634 systemd[1]: Started sshd@20-10.200.20.45:22-10.200.16.10:37828.service - OpenSSH per-connection server daemon (10.200.16.10:37828). May 27 17:06:49.878470 sshd[5160]: Accepted publickey for core from 10.200.16.10 port 37828 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:49.879679 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:49.883749 systemd-logind[1856]: New session 23 of user core. May 27 17:06:49.889551 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 17:06:50.266433 sshd[5162]: Connection closed by 10.200.16.10 port 37828 May 27 17:06:50.266212 sshd-session[5160]: pam_unix(sshd:session): session closed for user core May 27 17:06:50.270391 systemd-logind[1856]: Session 23 logged out. Waiting for processes to exit. May 27 17:06:50.270593 systemd[1]: sshd@20-10.200.20.45:22-10.200.16.10:37828.service: Deactivated successfully. May 27 17:06:50.272910 systemd[1]: session-23.scope: Deactivated successfully. May 27 17:06:50.274962 systemd-logind[1856]: Removed session 23. May 27 17:06:50.355520 systemd[1]: Started sshd@21-10.200.20.45:22-10.200.16.10:37836.service - OpenSSH per-connection server daemon (10.200.16.10:37836). 
May 27 17:06:50.850430 sshd[5174]: Accepted publickey for core from 10.200.16.10 port 37836 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:50.851677 sshd-session[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:50.856177 systemd-logind[1856]: New session 24 of user core. May 27 17:06:50.863559 systemd[1]: Started session-24.scope - Session 24 of User core. May 27 17:06:52.499384 containerd[1886]: time="2025-05-27T17:06:52.499127364Z" level=info msg="StopContainer for \"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d\" with timeout 30 (s)" May 27 17:06:52.500569 containerd[1886]: time="2025-05-27T17:06:52.500408285Z" level=info msg="Stop container \"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d\" with signal terminated" May 27 17:06:52.510249 containerd[1886]: time="2025-05-27T17:06:52.510081680Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:06:52.516905 containerd[1886]: time="2025-05-27T17:06:52.516673234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" id:\"5b28d6c50815cb9c7c727cf540e9bf735847ceb947e51dbeca083ae69118909f\" pid:5196 exited_at:{seconds:1748365612 nanos:515829447}" May 27 17:06:52.518308 systemd[1]: cri-containerd-9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d.scope: Deactivated successfully. May 27 17:06:52.520312 containerd[1886]: time="2025-05-27T17:06:52.520055741Z" level=info msg="StopContainer for \"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" with timeout 2 (s)" May 27 17:06:52.520312 containerd[1886]: time="2025-05-27T17:06:52.520251555Z" level=info msg="received exit event container_id:\"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d\" id:\"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d\" pid:4211 exited_at:{seconds:1748365612 nanos:519843366}" May 27 17:06:52.520575 containerd[1886]: time="2025-05-27T17:06:52.520464050Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d\" id:\"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d\" pid:4211 exited_at:{seconds:1748365612 nanos:519843366}" May 27 17:06:52.520741 containerd[1886]: time="2025-05-27T17:06:52.520722338Z" level=info msg="Stop container \"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" with signal terminated" May 27 17:06:52.531199 systemd-networkd[1668]: lxc_health: Link DOWN May 27 17:06:52.531207 systemd-networkd[1668]: lxc_health: Lost carrier May 27 17:06:52.549570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d-rootfs.mount: Deactivated successfully. May 27 17:06:52.552403 systemd[1]: cri-containerd-c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f.scope: Deactivated successfully. May 27 17:06:52.553177 systemd[1]: cri-containerd-c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f.scope: Consumed 4.996s CPU time, 138M memory peak, 128K read from disk, 12.9M written to disk. 
May 27 17:06:52.554524 containerd[1886]: time="2025-05-27T17:06:52.554482907Z" level=info msg="received exit event container_id:\"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" id:\"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" pid:4067 exited_at:{seconds:1748365612 nanos:553740564}" May 27 17:06:52.555357 containerd[1886]: time="2025-05-27T17:06:52.555142552Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" id:\"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" pid:4067 exited_at:{seconds:1748365612 nanos:553740564}" May 27 17:06:52.572002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f-rootfs.mount: Deactivated successfully. May 27 17:06:52.668193 containerd[1886]: time="2025-05-27T17:06:52.668093062Z" level=info msg="StopContainer for \"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d\" returns successfully" May 27 17:06:52.668893 containerd[1886]: time="2025-05-27T17:06:52.668862199Z" level=info msg="StopContainer for \"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" returns successfully" May 27 17:06:52.669554 containerd[1886]: time="2025-05-27T17:06:52.669497515Z" level=info msg="StopPodSandbox for \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\"" May 27 17:06:52.669884 containerd[1886]: time="2025-05-27T17:06:52.669580230Z" level=info msg="Container to stop \"4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:06:52.669884 containerd[1886]: time="2025-05-27T17:06:52.669589966Z" level=info msg="Container to stop \"cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:06:52.669884 containerd[1886]: time="2025-05-27T17:06:52.669596302Z" level=info msg="Container to stop \"7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:06:52.669884 containerd[1886]: time="2025-05-27T17:06:52.669602398Z" level=info msg="Container to stop \"15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:06:52.669884 containerd[1886]: time="2025-05-27T17:06:52.669608054Z" level=info msg="Container to stop \"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:06:52.669884 containerd[1886]: time="2025-05-27T17:06:52.669624911Z" level=info msg="StopPodSandbox for \"bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f\"" May 27 17:06:52.669884 containerd[1886]: time="2025-05-27T17:06:52.669680641Z" level=info msg="Container to stop \"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:06:52.675134 systemd[1]: cri-containerd-fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138.scope: Deactivated successfully. May 27 17:06:52.676433 systemd[1]: cri-containerd-bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f.scope: Deactivated successfully. 
May 27 17:06:52.678114 containerd[1886]: time="2025-05-27T17:06:52.677971496Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" id:\"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" pid:3614 exit_status:137 exited_at:{seconds:1748365612 nanos:675219049}" May 27 17:06:52.697600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f-rootfs.mount: Deactivated successfully. May 27 17:06:52.707282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138-rootfs.mount: Deactivated successfully. May 27 17:06:52.727312 containerd[1886]: time="2025-05-27T17:06:52.727122738Z" level=info msg="shim disconnected" id=bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f namespace=k8s.io May 27 17:06:52.728641 containerd[1886]: time="2025-05-27T17:06:52.727372402Z" level=warning msg="cleaning up after shim disconnected" id=bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f namespace=k8s.io May 27 17:06:52.728641 containerd[1886]: time="2025-05-27T17:06:52.727406436Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 17:06:52.732965 containerd[1886]: time="2025-05-27T17:06:52.732916723Z" level=info msg="shim disconnected" id=fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138 namespace=k8s.io May 27 17:06:52.733189 containerd[1886]: time="2025-05-27T17:06:52.733045447Z" level=warning msg="cleaning up after shim disconnected" id=fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138 namespace=k8s.io May 27 17:06:52.733189 containerd[1886]: time="2025-05-27T17:06:52.733077888Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 17:06:52.740713 containerd[1886]: time="2025-05-27T17:06:52.740662689Z" level=info msg="received exit event sandbox_id:\"bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f\" exit_status:137 exited_at:{seconds:1748365612 nanos:677823924}" May 27 17:06:52.741675 containerd[1886]: time="2025-05-27T17:06:52.741599015Z" level=info msg="TearDown network for sandbox \"bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f\" successfully" May 27 17:06:52.741675 containerd[1886]: time="2025-05-27T17:06:52.741626208Z" level=info msg="StopPodSandbox for \"bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f\" returns successfully" May 27 17:06:52.743449 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f-shm.mount: Deactivated successfully. 
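
The teardown around this point starts with StopContainer requests using signal "terminated" (SIGTERM, with a 30 s timeout for the operator container and 2 s for the cilium agent), and the sandbox-level TaskExit/exit events then report exit_status:137. Assuming the usual wait-status convention of 128 plus the signal number, 137 reads as a SIGKILL, consistent with the remaining sandbox tasks being killed rather than exiting on their own. A tiny sketch of that decoding (an illustration of the convention, not containerd code):

    # Hypothetical decoder for the exit_status values in the TaskExit events above,
    # assuming the common 128 + signal-number encoding for signal deaths.
    import signal

    def describe(status: int) -> str:
        if status > 128:
            return f"{status}: killed by {signal.Signals(status - 128).name}"
        return f"{status}: exited with code {status}"

    print(describe(137))  # 137: killed by SIGKILL
    print(describe(0))    # 0: exited with code 0
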
May 27 17:06:52.750514 containerd[1886]: time="2025-05-27T17:06:52.750260642Z" level=info msg="received exit event sandbox_id:\"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" exit_status:137 exited_at:{seconds:1748365612 nanos:675219049}" May 27 17:06:52.751576 containerd[1886]: time="2025-05-27T17:06:52.750840516Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f\" id:\"bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f\" pid:3659 exit_status:137 exited_at:{seconds:1748365612 nanos:677823924}" May 27 17:06:52.752572 containerd[1886]: time="2025-05-27T17:06:52.752487017Z" level=info msg="TearDown network for sandbox \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" successfully" May 27 17:06:52.752572 containerd[1886]: time="2025-05-27T17:06:52.752563067Z" level=info msg="StopPodSandbox for \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" returns successfully" May 27 17:06:52.931740 kubelet[3487]: I0527 17:06:52.931201 3487 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9971166-7dc7-4ddb-ace6-63889aeb6c05-clustermesh-secrets\") pod \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " May 27 17:06:52.931740 kubelet[3487]: I0527 17:06:52.931245 3487 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-bpf-maps\") pod \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " May 27 17:06:52.931740 kubelet[3487]: I0527 17:06:52.931255 3487 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-hostproc\") pod \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " May 27 17:06:52.931740 kubelet[3487]: I0527 17:06:52.931269 3487 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-etc-cni-netd\") pod \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " May 27 17:06:52.931740 kubelet[3487]: I0527 17:06:52.931280 3487 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jnn8\" (UniqueName: \"kubernetes.io/projected/d9971166-7dc7-4ddb-ace6-63889aeb6c05-kube-api-access-4jnn8\") pod \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " May 27 17:06:52.931740 kubelet[3487]: I0527 17:06:52.931292 3487 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-cilium-cgroup\") pod \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " May 27 17:06:52.932171 kubelet[3487]: I0527 17:06:52.931302 3487 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-host-proc-sys-kernel\") pod \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " May 27 17:06:52.932171 kubelet[3487]: I0527 17:06:52.931313 3487 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9971166-7dc7-4ddb-ace6-63889aeb6c05-cilium-config-path\") pod \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " May 27 17:06:52.932171 kubelet[3487]: I0527 17:06:52.931326 3487 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12031f1b-69ed-43c7-a969-0b0a5630d402-cilium-config-path\") pod \"12031f1b-69ed-43c7-a969-0b0a5630d402\" (UID: \"12031f1b-69ed-43c7-a969-0b0a5630d402\") " May 27 17:06:52.932171 kubelet[3487]: I0527 17:06:52.931335 3487 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-xtables-lock\") pod \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " May 27 17:06:52.932171 kubelet[3487]: I0527 17:06:52.931371 3487 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-cni-path\") pod \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " May 27 17:06:52.932171 kubelet[3487]: I0527 17:06:52.931382 3487 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9971166-7dc7-4ddb-ace6-63889aeb6c05-hubble-tls\") pod \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " May 27 17:06:52.932264 kubelet[3487]: I0527 17:06:52.931391 3487 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-host-proc-sys-net\") pod \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " May 27 17:06:52.932264 kubelet[3487]: I0527 17:06:52.931400 3487 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7gf5\" (UniqueName: \"kubernetes.io/projected/12031f1b-69ed-43c7-a969-0b0a5630d402-kube-api-access-b7gf5\") pod \"12031f1b-69ed-43c7-a969-0b0a5630d402\" (UID: \"12031f1b-69ed-43c7-a969-0b0a5630d402\") " May 27 17:06:52.932264 kubelet[3487]: I0527 17:06:52.931413 3487 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-lib-modules\") pod \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " May 27 17:06:52.932264 kubelet[3487]: I0527 17:06:52.931421 3487 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-cilium-run\") pod \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\" (UID: \"d9971166-7dc7-4ddb-ace6-63889aeb6c05\") " May 27 17:06:52.932264 kubelet[3487]: I0527 17:06:52.931467 3487 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d9971166-7dc7-4ddb-ace6-63889aeb6c05" (UID: "d9971166-7dc7-4ddb-ace6-63889aeb6c05"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:06:52.932264 kubelet[3487]: I0527 17:06:52.931505 3487 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d9971166-7dc7-4ddb-ace6-63889aeb6c05" (UID: "d9971166-7dc7-4ddb-ace6-63889aeb6c05"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:06:52.932380 kubelet[3487]: I0527 17:06:52.931515 3487 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-hostproc" (OuterVolumeSpecName: "hostproc") pod "d9971166-7dc7-4ddb-ace6-63889aeb6c05" (UID: "d9971166-7dc7-4ddb-ace6-63889aeb6c05"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:06:52.932380 kubelet[3487]: I0527 17:06:52.931524 3487 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d9971166-7dc7-4ddb-ace6-63889aeb6c05" (UID: "d9971166-7dc7-4ddb-ace6-63889aeb6c05"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:06:52.933360 kubelet[3487]: I0527 17:06:52.933010 3487 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d9971166-7dc7-4ddb-ace6-63889aeb6c05" (UID: "d9971166-7dc7-4ddb-ace6-63889aeb6c05"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:06:52.933360 kubelet[3487]: I0527 17:06:52.933066 3487 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d9971166-7dc7-4ddb-ace6-63889aeb6c05" (UID: "d9971166-7dc7-4ddb-ace6-63889aeb6c05"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:06:52.933552 kubelet[3487]: I0527 17:06:52.933522 3487 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d9971166-7dc7-4ddb-ace6-63889aeb6c05" (UID: "d9971166-7dc7-4ddb-ace6-63889aeb6c05"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:06:52.935642 kubelet[3487]: I0527 17:06:52.935614 3487 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9971166-7dc7-4ddb-ace6-63889aeb6c05-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d9971166-7dc7-4ddb-ace6-63889aeb6c05" (UID: "d9971166-7dc7-4ddb-ace6-63889aeb6c05"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 17:06:52.935811 kubelet[3487]: I0527 17:06:52.935797 3487 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d9971166-7dc7-4ddb-ace6-63889aeb6c05" (UID: "d9971166-7dc7-4ddb-ace6-63889aeb6c05"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:06:52.935936 kubelet[3487]: I0527 17:06:52.935919 3487 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9971166-7dc7-4ddb-ace6-63889aeb6c05-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d9971166-7dc7-4ddb-ace6-63889aeb6c05" (UID: "d9971166-7dc7-4ddb-ace6-63889aeb6c05"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 17:06:52.936017 kubelet[3487]: I0527 17:06:52.936006 3487 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d9971166-7dc7-4ddb-ace6-63889aeb6c05" (UID: "d9971166-7dc7-4ddb-ace6-63889aeb6c05"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:06:52.937058 kubelet[3487]: I0527 17:06:52.937024 3487 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-cni-path" (OuterVolumeSpecName: "cni-path") pod "d9971166-7dc7-4ddb-ace6-63889aeb6c05" (UID: "d9971166-7dc7-4ddb-ace6-63889aeb6c05"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:06:52.937127 kubelet[3487]: I0527 17:06:52.937039 3487 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9971166-7dc7-4ddb-ace6-63889aeb6c05-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d9971166-7dc7-4ddb-ace6-63889aeb6c05" (UID: "d9971166-7dc7-4ddb-ace6-63889aeb6c05"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:06:52.937232 kubelet[3487]: I0527 17:06:52.937214 3487 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9971166-7dc7-4ddb-ace6-63889aeb6c05-kube-api-access-4jnn8" (OuterVolumeSpecName: "kube-api-access-4jnn8") pod "d9971166-7dc7-4ddb-ace6-63889aeb6c05" (UID: "d9971166-7dc7-4ddb-ace6-63889aeb6c05"). InnerVolumeSpecName "kube-api-access-4jnn8". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:06:52.937726 kubelet[3487]: I0527 17:06:52.937706 3487 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12031f1b-69ed-43c7-a969-0b0a5630d402-kube-api-access-b7gf5" (OuterVolumeSpecName: "kube-api-access-b7gf5") pod "12031f1b-69ed-43c7-a969-0b0a5630d402" (UID: "12031f1b-69ed-43c7-a969-0b0a5630d402"). InnerVolumeSpecName "kube-api-access-b7gf5". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:06:52.938052 kubelet[3487]: I0527 17:06:52.938024 3487 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12031f1b-69ed-43c7-a969-0b0a5630d402-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "12031f1b-69ed-43c7-a969-0b0a5630d402" (UID: "12031f1b-69ed-43c7-a969-0b0a5630d402"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 17:06:53.031971 kubelet[3487]: I0527 17:06:53.031931 3487 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9971166-7dc7-4ddb-ace6-63889aeb6c05-hubble-tls\") on node \"ci-4344.0.0-a-efe79b1159\" DevicePath \"\"" May 27 17:06:53.031971 kubelet[3487]: I0527 17:06:53.031972 3487 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-host-proc-sys-net\") on node \"ci-4344.0.0-a-efe79b1159\" DevicePath \"\"" May 27 17:06:53.031971 kubelet[3487]: I0527 17:06:53.031981 3487 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b7gf5\" (UniqueName: \"kubernetes.io/projected/12031f1b-69ed-43c7-a969-0b0a5630d402-kube-api-access-b7gf5\") on node \"ci-4344.0.0-a-efe79b1159\" DevicePath \"\"" May 27 17:06:53.032181 kubelet[3487]: I0527 17:06:53.031992 3487 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-lib-modules\") on node \"ci-4344.0.0-a-efe79b1159\" DevicePath \"\"" May 27 17:06:53.032181 kubelet[3487]: I0527 17:06:53.031998 3487 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-cilium-run\") on node \"ci-4344.0.0-a-efe79b1159\" DevicePath \"\"" May 27 17:06:53.032181 kubelet[3487]: I0527 17:06:53.032004 3487 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9971166-7dc7-4ddb-ace6-63889aeb6c05-clustermesh-secrets\") on node \"ci-4344.0.0-a-efe79b1159\" DevicePath \"\"" May 27 17:06:53.032181 kubelet[3487]: I0527 17:06:53.032010 3487 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-bpf-maps\") on node \"ci-4344.0.0-a-efe79b1159\" DevicePath \"\"" May 27 17:06:53.032181 kubelet[3487]: I0527 17:06:53.032015 3487 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-hostproc\") on node \"ci-4344.0.0-a-efe79b1159\" DevicePath \"\"" May 27 17:06:53.032181 kubelet[3487]: I0527 17:06:53.032021 3487 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-etc-cni-netd\") on node \"ci-4344.0.0-a-efe79b1159\" DevicePath \"\"" May 27 17:06:53.032181 kubelet[3487]: I0527 17:06:53.032026 3487 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4jnn8\" (UniqueName: \"kubernetes.io/projected/d9971166-7dc7-4ddb-ace6-63889aeb6c05-kube-api-access-4jnn8\") on node \"ci-4344.0.0-a-efe79b1159\" DevicePath \"\"" May 27 17:06:53.032181 kubelet[3487]: I0527 17:06:53.032035 3487 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-cilium-cgroup\") on node \"ci-4344.0.0-a-efe79b1159\" DevicePath \"\"" May 27 17:06:53.032298 kubelet[3487]: I0527 17:06:53.032041 3487 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-host-proc-sys-kernel\") on node \"ci-4344.0.0-a-efe79b1159\" DevicePath \"\"" May 27 17:06:53.032298 kubelet[3487]: I0527 17:06:53.032047 3487 
reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9971166-7dc7-4ddb-ace6-63889aeb6c05-cilium-config-path\") on node \"ci-4344.0.0-a-efe79b1159\" DevicePath \"\"" May 27 17:06:53.032298 kubelet[3487]: I0527 17:06:53.032054 3487 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12031f1b-69ed-43c7-a969-0b0a5630d402-cilium-config-path\") on node \"ci-4344.0.0-a-efe79b1159\" DevicePath \"\"" May 27 17:06:53.032298 kubelet[3487]: I0527 17:06:53.032059 3487 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-xtables-lock\") on node \"ci-4344.0.0-a-efe79b1159\" DevicePath \"\"" May 27 17:06:53.032298 kubelet[3487]: I0527 17:06:53.032064 3487 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d9971166-7dc7-4ddb-ace6-63889aeb6c05-cni-path\") on node \"ci-4344.0.0-a-efe79b1159\" DevicePath \"\"" May 27 17:06:53.169575 kubelet[3487]: I0527 17:06:53.169539 3487 scope.go:117] "RemoveContainer" containerID="9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d" May 27 17:06:53.174938 systemd[1]: Removed slice kubepods-besteffort-pod12031f1b_69ed_43c7_a969_0b0a5630d402.slice - libcontainer container kubepods-besteffort-pod12031f1b_69ed_43c7_a969_0b0a5630d402.slice. May 27 17:06:53.176378 containerd[1886]: time="2025-05-27T17:06:53.176264455Z" level=info msg="RemoveContainer for \"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d\"" May 27 17:06:53.191941 systemd[1]: Removed slice kubepods-burstable-podd9971166_7dc7_4ddb_ace6_63889aeb6c05.slice - libcontainer container kubepods-burstable-podd9971166_7dc7_4ddb_ace6_63889aeb6c05.slice. May 27 17:06:53.192144 systemd[1]: kubepods-burstable-podd9971166_7dc7_4ddb_ace6_63889aeb6c05.slice: Consumed 5.062s CPU time, 138.4M memory peak, 128K read from disk, 12.9M written to disk. 
May 27 17:06:53.195825 containerd[1886]: time="2025-05-27T17:06:53.195782092Z" level=info msg="RemoveContainer for \"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d\" returns successfully" May 27 17:06:53.196680 kubelet[3487]: I0527 17:06:53.196660 3487 scope.go:117] "RemoveContainer" containerID="9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d" May 27 17:06:53.201334 containerd[1886]: time="2025-05-27T17:06:53.201287995Z" level=error msg="ContainerStatus for \"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d\": not found" May 27 17:06:53.206513 kubelet[3487]: E0527 17:06:53.202580 3487 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d\": not found" containerID="9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d" May 27 17:06:53.207883 kubelet[3487]: I0527 17:06:53.207763 3487 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d"} err="failed to get container status \"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e1abfc5bd9033ed6fe069780926c29e09038ebf6666019adc6d06a68fffca4d\": not found" May 27 17:06:53.207883 kubelet[3487]: I0527 17:06:53.207822 3487 scope.go:117] "RemoveContainer" containerID="c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f" May 27 17:06:53.225734 containerd[1886]: time="2025-05-27T17:06:53.223803838Z" level=info msg="RemoveContainer for \"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\"" May 27 17:06:53.255124 containerd[1886]: time="2025-05-27T17:06:53.255066224Z" level=info msg="RemoveContainer for \"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" returns successfully" May 27 17:06:53.255603 kubelet[3487]: I0527 17:06:53.255578 3487 scope.go:117] "RemoveContainer" containerID="cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8" May 27 17:06:53.257374 containerd[1886]: time="2025-05-27T17:06:53.257324712Z" level=info msg="RemoveContainer for \"cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8\"" May 27 17:06:53.270796 containerd[1886]: time="2025-05-27T17:06:53.270752195Z" level=info msg="RemoveContainer for \"cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8\" returns successfully" May 27 17:06:53.271300 kubelet[3487]: I0527 17:06:53.271184 3487 scope.go:117] "RemoveContainer" containerID="4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff" May 27 17:06:53.273325 containerd[1886]: time="2025-05-27T17:06:53.273290587Z" level=info msg="RemoveContainer for \"4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff\"" May 27 17:06:53.293952 containerd[1886]: time="2025-05-27T17:06:53.293833312Z" level=info msg="RemoveContainer for \"4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff\" returns successfully" May 27 17:06:53.294640 kubelet[3487]: I0527 17:06:53.294600 3487 scope.go:117] "RemoveContainer" containerID="15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a" May 27 17:06:53.296101 containerd[1886]: 
time="2025-05-27T17:06:53.296069631Z" level=info msg="RemoveContainer for \"15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a\"" May 27 17:06:53.307711 containerd[1886]: time="2025-05-27T17:06:53.307670512Z" level=info msg="RemoveContainer for \"15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a\" returns successfully" May 27 17:06:53.308047 kubelet[3487]: I0527 17:06:53.308024 3487 scope.go:117] "RemoveContainer" containerID="7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0" May 27 17:06:53.309941 containerd[1886]: time="2025-05-27T17:06:53.309887134Z" level=info msg="RemoveContainer for \"7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0\"" May 27 17:06:53.328730 containerd[1886]: time="2025-05-27T17:06:53.328663883Z" level=info msg="RemoveContainer for \"7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0\" returns successfully" May 27 17:06:53.329224 kubelet[3487]: I0527 17:06:53.328990 3487 scope.go:117] "RemoveContainer" containerID="c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f" May 27 17:06:53.329563 containerd[1886]: time="2025-05-27T17:06:53.329459164Z" level=error msg="ContainerStatus for \"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\": not found" May 27 17:06:53.329647 kubelet[3487]: E0527 17:06:53.329625 3487 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\": not found" containerID="c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f" May 27 17:06:53.329897 kubelet[3487]: I0527 17:06:53.329661 3487 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f"} err="failed to get container status \"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c9007a9746310d5230714d993b4c76ab3e2e3afcc2ea0884f6296a6b67fceb6f\": not found" May 27 17:06:53.329897 kubelet[3487]: I0527 17:06:53.329681 3487 scope.go:117] "RemoveContainer" containerID="cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8" May 27 17:06:53.330208 containerd[1886]: time="2025-05-27T17:06:53.329882058Z" level=error msg="ContainerStatus for \"cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8\": not found" May 27 17:06:53.330267 kubelet[3487]: E0527 17:06:53.330101 3487 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8\": not found" containerID="cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8" May 27 17:06:53.330267 kubelet[3487]: I0527 17:06:53.330127 3487 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8"} err="failed to get container status 
\"cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf0a71876cad9625dd2d3f90aadd9d782b398be80d50650dfbf3692762ae2fc8\": not found" May 27 17:06:53.330267 kubelet[3487]: I0527 17:06:53.330146 3487 scope.go:117] "RemoveContainer" containerID="4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff" May 27 17:06:53.330497 containerd[1886]: time="2025-05-27T17:06:53.330444844Z" level=error msg="ContainerStatus for \"4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff\": not found" May 27 17:06:53.330722 kubelet[3487]: E0527 17:06:53.330651 3487 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff\": not found" containerID="4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff" May 27 17:06:53.330765 kubelet[3487]: I0527 17:06:53.330728 3487 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff"} err="failed to get container status \"4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"4eb47812b9e6f5aae03f0e50940b1e8238d4e0dffc3b5366fba35679f6b311ff\": not found" May 27 17:06:53.330765 kubelet[3487]: I0527 17:06:53.330743 3487 scope.go:117] "RemoveContainer" containerID="15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a" May 27 17:06:53.330919 containerd[1886]: time="2025-05-27T17:06:53.330889922Z" level=error msg="ContainerStatus for \"15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a\": not found" May 27 17:06:53.331021 kubelet[3487]: E0527 17:06:53.331001 3487 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a\": not found" containerID="15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a" May 27 17:06:53.331067 kubelet[3487]: I0527 17:06:53.331022 3487 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a"} err="failed to get container status \"15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a\": rpc error: code = NotFound desc = an error occurred when try to find container \"15b59d61c4ed9d0fbd84379eed4c23def617e60729d42485a187320be770e30a\": not found" May 27 17:06:53.331067 kubelet[3487]: I0527 17:06:53.331035 3487 scope.go:117] "RemoveContainer" containerID="7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0" May 27 17:06:53.331259 containerd[1886]: time="2025-05-27T17:06:53.331211220Z" level=error msg="ContainerStatus for \"7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0\": not found" May 27 17:06:53.331432 kubelet[3487]: E0527 17:06:53.331413 3487 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0\": not found" containerID="7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0" May 27 17:06:53.331558 kubelet[3487]: I0527 17:06:53.331540 3487 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0"} err="failed to get container status \"7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0\": rpc error: code = NotFound desc = an error occurred when try to find container \"7cc56b330168f4a1dc77313e1fe4764851e883c8660abf0e7ae71d95d761dae0\": not found" May 27 17:06:53.549391 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138-shm.mount: Deactivated successfully. May 27 17:06:53.549865 systemd[1]: var-lib-kubelet-pods-12031f1b\x2d69ed\x2d43c7\x2da969\x2d0b0a5630d402-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db7gf5.mount: Deactivated successfully. May 27 17:06:53.549910 systemd[1]: var-lib-kubelet-pods-d9971166\x2d7dc7\x2d4ddb\x2dace6\x2d63889aeb6c05-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4jnn8.mount: Deactivated successfully. May 27 17:06:53.549950 systemd[1]: var-lib-kubelet-pods-d9971166\x2d7dc7\x2d4ddb\x2dace6\x2d63889aeb6c05-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 27 17:06:53.549984 systemd[1]: var-lib-kubelet-pods-d9971166\x2d7dc7\x2d4ddb\x2dace6\x2d63889aeb6c05-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 27 17:06:53.661743 kubelet[3487]: I0527 17:06:53.661671 3487 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12031f1b-69ed-43c7-a969-0b0a5630d402" path="/var/lib/kubelet/pods/12031f1b-69ed-43c7-a969-0b0a5630d402/volumes" May 27 17:06:53.662272 kubelet[3487]: I0527 17:06:53.662243 3487 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9971166-7dc7-4ddb-ace6-63889aeb6c05" path="/var/lib/kubelet/pods/d9971166-7dc7-4ddb-ace6-63889aeb6c05/volumes" May 27 17:06:53.762725 kubelet[3487]: E0527 17:06:53.762684 3487 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 27 17:06:54.501933 sshd[5176]: Connection closed by 10.200.16.10 port 37836 May 27 17:06:54.501332 sshd-session[5174]: pam_unix(sshd:session): session closed for user core May 27 17:06:54.504822 systemd-logind[1856]: Session 24 logged out. Waiting for processes to exit. May 27 17:06:54.505947 systemd[1]: sshd@21-10.200.20.45:22-10.200.16.10:37836.service: Deactivated successfully. May 27 17:06:54.508184 systemd[1]: session-24.scope: Deactivated successfully. May 27 17:06:54.509945 systemd-logind[1856]: Removed session 24. May 27 17:06:54.604990 systemd[1]: Started sshd@22-10.200.20.45:22-10.200.16.10:37850.service - OpenSSH per-connection server daemon (10.200.16.10:37850). 
May 27 17:06:55.085080 sshd[5327]: Accepted publickey for core from 10.200.16.10 port 37850 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:55.086527 sshd-session[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:55.091149 systemd-logind[1856]: New session 25 of user core. May 27 17:06:55.096547 systemd[1]: Started session-25.scope - Session 25 of User core. May 27 17:06:55.804005 systemd[1]: Created slice kubepods-burstable-podea5d65f6_00ed_4fe3_af2a_652a731a92fb.slice - libcontainer container kubepods-burstable-podea5d65f6_00ed_4fe3_af2a_652a731a92fb.slice. May 27 17:06:55.837862 sshd[5329]: Connection closed by 10.200.16.10 port 37850 May 27 17:06:55.838802 sshd-session[5327]: pam_unix(sshd:session): session closed for user core May 27 17:06:55.845293 systemd[1]: sshd@22-10.200.20.45:22-10.200.16.10:37850.service: Deactivated successfully. May 27 17:06:55.851094 systemd[1]: session-25.scope: Deactivated successfully. May 27 17:06:55.853071 systemd-logind[1856]: Session 25 logged out. Waiting for processes to exit. May 27 17:06:55.855091 systemd-logind[1856]: Removed session 25. May 27 17:06:55.924933 systemd[1]: Started sshd@23-10.200.20.45:22-10.200.16.10:37858.service - OpenSSH per-connection server daemon (10.200.16.10:37858). May 27 17:06:55.944082 kubelet[3487]: I0527 17:06:55.944031 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea5d65f6-00ed-4fe3-af2a-652a731a92fb-etc-cni-netd\") pod \"cilium-pt5bn\" (UID: \"ea5d65f6-00ed-4fe3-af2a-652a731a92fb\") " pod="kube-system/cilium-pt5bn" May 27 17:06:55.944082 kubelet[3487]: I0527 17:06:55.944072 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea5d65f6-00ed-4fe3-af2a-652a731a92fb-lib-modules\") pod \"cilium-pt5bn\" (UID: \"ea5d65f6-00ed-4fe3-af2a-652a731a92fb\") " pod="kube-system/cilium-pt5bn" May 27 17:06:55.944082 kubelet[3487]: I0527 17:06:55.944089 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea5d65f6-00ed-4fe3-af2a-652a731a92fb-hostproc\") pod \"cilium-pt5bn\" (UID: \"ea5d65f6-00ed-4fe3-af2a-652a731a92fb\") " pod="kube-system/cilium-pt5bn" May 27 17:06:55.944493 kubelet[3487]: I0527 17:06:55.944100 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea5d65f6-00ed-4fe3-af2a-652a731a92fb-cni-path\") pod \"cilium-pt5bn\" (UID: \"ea5d65f6-00ed-4fe3-af2a-652a731a92fb\") " pod="kube-system/cilium-pt5bn" May 27 17:06:55.944493 kubelet[3487]: I0527 17:06:55.944113 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea5d65f6-00ed-4fe3-af2a-652a731a92fb-host-proc-sys-kernel\") pod \"cilium-pt5bn\" (UID: \"ea5d65f6-00ed-4fe3-af2a-652a731a92fb\") " pod="kube-system/cilium-pt5bn" May 27 17:06:55.944493 kubelet[3487]: I0527 17:06:55.944122 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea5d65f6-00ed-4fe3-af2a-652a731a92fb-host-proc-sys-net\") pod \"cilium-pt5bn\" (UID: \"ea5d65f6-00ed-4fe3-af2a-652a731a92fb\") " 
pod="kube-system/cilium-pt5bn" May 27 17:06:55.944493 kubelet[3487]: I0527 17:06:55.944132 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ea5d65f6-00ed-4fe3-af2a-652a731a92fb-cilium-ipsec-secrets\") pod \"cilium-pt5bn\" (UID: \"ea5d65f6-00ed-4fe3-af2a-652a731a92fb\") " pod="kube-system/cilium-pt5bn" May 27 17:06:55.944493 kubelet[3487]: I0527 17:06:55.944142 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea5d65f6-00ed-4fe3-af2a-652a731a92fb-hubble-tls\") pod \"cilium-pt5bn\" (UID: \"ea5d65f6-00ed-4fe3-af2a-652a731a92fb\") " pod="kube-system/cilium-pt5bn" May 27 17:06:55.944493 kubelet[3487]: I0527 17:06:55.944153 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea5d65f6-00ed-4fe3-af2a-652a731a92fb-cilium-run\") pod \"cilium-pt5bn\" (UID: \"ea5d65f6-00ed-4fe3-af2a-652a731a92fb\") " pod="kube-system/cilium-pt5bn" May 27 17:06:55.944585 kubelet[3487]: I0527 17:06:55.944161 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea5d65f6-00ed-4fe3-af2a-652a731a92fb-bpf-maps\") pod \"cilium-pt5bn\" (UID: \"ea5d65f6-00ed-4fe3-af2a-652a731a92fb\") " pod="kube-system/cilium-pt5bn" May 27 17:06:55.944585 kubelet[3487]: I0527 17:06:55.944170 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea5d65f6-00ed-4fe3-af2a-652a731a92fb-xtables-lock\") pod \"cilium-pt5bn\" (UID: \"ea5d65f6-00ed-4fe3-af2a-652a731a92fb\") " pod="kube-system/cilium-pt5bn" May 27 17:06:55.944585 kubelet[3487]: I0527 17:06:55.944178 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea5d65f6-00ed-4fe3-af2a-652a731a92fb-cilium-config-path\") pod \"cilium-pt5bn\" (UID: \"ea5d65f6-00ed-4fe3-af2a-652a731a92fb\") " pod="kube-system/cilium-pt5bn" May 27 17:06:55.944585 kubelet[3487]: I0527 17:06:55.944189 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea5d65f6-00ed-4fe3-af2a-652a731a92fb-cilium-cgroup\") pod \"cilium-pt5bn\" (UID: \"ea5d65f6-00ed-4fe3-af2a-652a731a92fb\") " pod="kube-system/cilium-pt5bn" May 27 17:06:55.944585 kubelet[3487]: I0527 17:06:55.944198 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hclsb\" (UniqueName: \"kubernetes.io/projected/ea5d65f6-00ed-4fe3-af2a-652a731a92fb-kube-api-access-hclsb\") pod \"cilium-pt5bn\" (UID: \"ea5d65f6-00ed-4fe3-af2a-652a731a92fb\") " pod="kube-system/cilium-pt5bn" May 27 17:06:55.944585 kubelet[3487]: I0527 17:06:55.944208 3487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea5d65f6-00ed-4fe3-af2a-652a731a92fb-clustermesh-secrets\") pod \"cilium-pt5bn\" (UID: \"ea5d65f6-00ed-4fe3-af2a-652a731a92fb\") " pod="kube-system/cilium-pt5bn" May 27 17:06:56.108269 containerd[1886]: time="2025-05-27T17:06:56.108149185Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-pt5bn,Uid:ea5d65f6-00ed-4fe3-af2a-652a731a92fb,Namespace:kube-system,Attempt:0,}" May 27 17:06:56.168586 containerd[1886]: time="2025-05-27T17:06:56.168253456Z" level=info msg="connecting to shim 2c838fc17815bc35d7352e0d01891e0394c826a04256e131ab03fb973efec2cc" address="unix:///run/containerd/s/5a8d518375380df2464d8dc050a7b748a10f8653008159c5c5d96e7e06aea1ba" namespace=k8s.io protocol=ttrpc version=3 May 27 17:06:56.186530 systemd[1]: Started cri-containerd-2c838fc17815bc35d7352e0d01891e0394c826a04256e131ab03fb973efec2cc.scope - libcontainer container 2c838fc17815bc35d7352e0d01891e0394c826a04256e131ab03fb973efec2cc. May 27 17:06:56.214472 containerd[1886]: time="2025-05-27T17:06:56.214427043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pt5bn,Uid:ea5d65f6-00ed-4fe3-af2a-652a731a92fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c838fc17815bc35d7352e0d01891e0394c826a04256e131ab03fb973efec2cc\"" May 27 17:06:56.222672 containerd[1886]: time="2025-05-27T17:06:56.222629760Z" level=info msg="CreateContainer within sandbox \"2c838fc17815bc35d7352e0d01891e0394c826a04256e131ab03fb973efec2cc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 17:06:56.258193 containerd[1886]: time="2025-05-27T17:06:56.258149697Z" level=info msg="Container 2b7bf89326d8d1848882ff505112a4450641c90830d9379b96a02f2a89387873: CDI devices from CRI Config.CDIDevices: []" May 27 17:06:56.289768 containerd[1886]: time="2025-05-27T17:06:56.289703068Z" level=info msg="CreateContainer within sandbox \"2c838fc17815bc35d7352e0d01891e0394c826a04256e131ab03fb973efec2cc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2b7bf89326d8d1848882ff505112a4450641c90830d9379b96a02f2a89387873\"" May 27 17:06:56.290353 containerd[1886]: time="2025-05-27T17:06:56.290301615Z" level=info msg="StartContainer for \"2b7bf89326d8d1848882ff505112a4450641c90830d9379b96a02f2a89387873\"" May 27 17:06:56.291187 containerd[1886]: time="2025-05-27T17:06:56.291150658Z" level=info msg="connecting to shim 2b7bf89326d8d1848882ff505112a4450641c90830d9379b96a02f2a89387873" address="unix:///run/containerd/s/5a8d518375380df2464d8dc050a7b748a10f8653008159c5c5d96e7e06aea1ba" protocol=ttrpc version=3 May 27 17:06:56.310522 systemd[1]: Started cri-containerd-2b7bf89326d8d1848882ff505112a4450641c90830d9379b96a02f2a89387873.scope - libcontainer container 2b7bf89326d8d1848882ff505112a4450641c90830d9379b96a02f2a89387873. May 27 17:06:56.342112 containerd[1886]: time="2025-05-27T17:06:56.341111182Z" level=info msg="StartContainer for \"2b7bf89326d8d1848882ff505112a4450641c90830d9379b96a02f2a89387873\" returns successfully" May 27 17:06:56.343516 systemd[1]: cri-containerd-2b7bf89326d8d1848882ff505112a4450641c90830d9379b96a02f2a89387873.scope: Deactivated successfully. 
May 27 17:06:56.346129 containerd[1886]: time="2025-05-27T17:06:56.345919439Z" level=info msg="received exit event container_id:\"2b7bf89326d8d1848882ff505112a4450641c90830d9379b96a02f2a89387873\" id:\"2b7bf89326d8d1848882ff505112a4450641c90830d9379b96a02f2a89387873\" pid:5404 exited_at:{seconds:1748365616 nanos:345531251}" May 27 17:06:56.346600 containerd[1886]: time="2025-05-27T17:06:56.346568524Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b7bf89326d8d1848882ff505112a4450641c90830d9379b96a02f2a89387873\" id:\"2b7bf89326d8d1848882ff505112a4450641c90830d9379b96a02f2a89387873\" pid:5404 exited_at:{seconds:1748365616 nanos:345531251}" May 27 17:06:56.419838 sshd[5340]: Accepted publickey for core from 10.200.16.10 port 37858 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:56.420898 sshd-session[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:56.425218 systemd-logind[1856]: New session 26 of user core. May 27 17:06:56.432572 systemd[1]: Started session-26.scope - Session 26 of User core. May 27 17:06:56.766955 sshd[5434]: Connection closed by 10.200.16.10 port 37858 May 27 17:06:56.767629 sshd-session[5340]: pam_unix(sshd:session): session closed for user core May 27 17:06:56.771095 systemd[1]: sshd@23-10.200.20.45:22-10.200.16.10:37858.service: Deactivated successfully. May 27 17:06:56.773940 systemd[1]: session-26.scope: Deactivated successfully. May 27 17:06:56.776154 systemd-logind[1856]: Session 26 logged out. Waiting for processes to exit. May 27 17:06:56.777766 systemd-logind[1856]: Removed session 26. May 27 17:06:56.851625 systemd[1]: Started sshd@24-10.200.20.45:22-10.200.16.10:37874.service - OpenSSH per-connection server daemon (10.200.16.10:37874). May 27 17:06:57.209371 containerd[1886]: time="2025-05-27T17:06:57.209277194Z" level=info msg="CreateContainer within sandbox \"2c838fc17815bc35d7352e0d01891e0394c826a04256e131ab03fb973efec2cc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 17:06:57.241321 containerd[1886]: time="2025-05-27T17:06:57.241269195Z" level=info msg="Container 2eaac82cefef4f959d8a1343a53976a4e5648d7c4e3338b52382bf7b5277c12a: CDI devices from CRI Config.CDIDevices: []" May 27 17:06:57.270317 containerd[1886]: time="2025-05-27T17:06:57.270248756Z" level=info msg="CreateContainer within sandbox \"2c838fc17815bc35d7352e0d01891e0394c826a04256e131ab03fb973efec2cc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2eaac82cefef4f959d8a1343a53976a4e5648d7c4e3338b52382bf7b5277c12a\"" May 27 17:06:57.270937 containerd[1886]: time="2025-05-27T17:06:57.270854991Z" level=info msg="StartContainer for \"2eaac82cefef4f959d8a1343a53976a4e5648d7c4e3338b52382bf7b5277c12a\"" May 27 17:06:57.273319 containerd[1886]: time="2025-05-27T17:06:57.273234067Z" level=info msg="connecting to shim 2eaac82cefef4f959d8a1343a53976a4e5648d7c4e3338b52382bf7b5277c12a" address="unix:///run/containerd/s/5a8d518375380df2464d8dc050a7b748a10f8653008159c5c5d96e7e06aea1ba" protocol=ttrpc version=3 May 27 17:06:57.290538 systemd[1]: Started cri-containerd-2eaac82cefef4f959d8a1343a53976a4e5648d7c4e3338b52382bf7b5277c12a.scope - libcontainer container 2eaac82cefef4f959d8a1343a53976a4e5648d7c4e3338b52382bf7b5277c12a. 
May 27 17:06:57.307815 sshd[5441]: Accepted publickey for core from 10.200.16.10 port 37874 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:57.311075 sshd-session[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:57.317278 systemd-logind[1856]: New session 27 of user core. May 27 17:06:57.321606 systemd[1]: Started session-27.scope - Session 27 of User core. May 27 17:06:57.326619 systemd[1]: cri-containerd-2eaac82cefef4f959d8a1343a53976a4e5648d7c4e3338b52382bf7b5277c12a.scope: Deactivated successfully. May 27 17:06:57.330171 containerd[1886]: time="2025-05-27T17:06:57.329572866Z" level=info msg="received exit event container_id:\"2eaac82cefef4f959d8a1343a53976a4e5648d7c4e3338b52382bf7b5277c12a\" id:\"2eaac82cefef4f959d8a1343a53976a4e5648d7c4e3338b52382bf7b5277c12a\" pid:5455 exited_at:{seconds:1748365617 nanos:328825946}" May 27 17:06:57.330171 containerd[1886]: time="2025-05-27T17:06:57.329570297Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2eaac82cefef4f959d8a1343a53976a4e5648d7c4e3338b52382bf7b5277c12a\" id:\"2eaac82cefef4f959d8a1343a53976a4e5648d7c4e3338b52382bf7b5277c12a\" pid:5455 exited_at:{seconds:1748365617 nanos:328825946}" May 27 17:06:57.330511 containerd[1886]: time="2025-05-27T17:06:57.330474310Z" level=info msg="StartContainer for \"2eaac82cefef4f959d8a1343a53976a4e5648d7c4e3338b52382bf7b5277c12a\" returns successfully" May 27 17:06:57.348584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2eaac82cefef4f959d8a1343a53976a4e5648d7c4e3338b52382bf7b5277c12a-rootfs.mount: Deactivated successfully. May 27 17:06:57.614372 kubelet[3487]: I0527 17:06:57.613950 3487 setters.go:618] "Node became not ready" node="ci-4344.0.0-a-efe79b1159" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-27T17:06:57Z","lastTransitionTime":"2025-05-27T17:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 27 17:06:58.213701 containerd[1886]: time="2025-05-27T17:06:58.213320749Z" level=info msg="CreateContainer within sandbox \"2c838fc17815bc35d7352e0d01891e0394c826a04256e131ab03fb973efec2cc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 17:06:58.263262 containerd[1886]: time="2025-05-27T17:06:58.262368308Z" level=info msg="Container c80ea451e7f4b1ded2091f87730678ad410c402a1f788f4cd346df7598696d36: CDI devices from CRI Config.CDIDevices: []" May 27 17:06:58.283617 containerd[1886]: time="2025-05-27T17:06:58.283573350Z" level=info msg="CreateContainer within sandbox \"2c838fc17815bc35d7352e0d01891e0394c826a04256e131ab03fb973efec2cc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c80ea451e7f4b1ded2091f87730678ad410c402a1f788f4cd346df7598696d36\"" May 27 17:06:58.284386 containerd[1886]: time="2025-05-27T17:06:58.284358127Z" level=info msg="StartContainer for \"c80ea451e7f4b1ded2091f87730678ad410c402a1f788f4cd346df7598696d36\"" May 27 17:06:58.286494 containerd[1886]: time="2025-05-27T17:06:58.286051748Z" level=info msg="connecting to shim c80ea451e7f4b1ded2091f87730678ad410c402a1f788f4cd346df7598696d36" address="unix:///run/containerd/s/5a8d518375380df2464d8dc050a7b748a10f8653008159c5c5d96e7e06aea1ba" protocol=ttrpc version=3 May 27 17:06:58.303596 systemd[1]: Started cri-containerd-c80ea451e7f4b1ded2091f87730678ad410c402a1f788f4cd346df7598696d36.scope - libcontainer 
container c80ea451e7f4b1ded2091f87730678ad410c402a1f788f4cd346df7598696d36. May 27 17:06:58.333020 systemd[1]: cri-containerd-c80ea451e7f4b1ded2091f87730678ad410c402a1f788f4cd346df7598696d36.scope: Deactivated successfully. May 27 17:06:58.335655 containerd[1886]: time="2025-05-27T17:06:58.335595515Z" level=info msg="received exit event container_id:\"c80ea451e7f4b1ded2091f87730678ad410c402a1f788f4cd346df7598696d36\" id:\"c80ea451e7f4b1ded2091f87730678ad410c402a1f788f4cd346df7598696d36\" pid:5505 exited_at:{seconds:1748365618 nanos:335060130}" May 27 17:06:58.336139 containerd[1886]: time="2025-05-27T17:06:58.336117084Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c80ea451e7f4b1ded2091f87730678ad410c402a1f788f4cd346df7598696d36\" id:\"c80ea451e7f4b1ded2091f87730678ad410c402a1f788f4cd346df7598696d36\" pid:5505 exited_at:{seconds:1748365618 nanos:335060130}" May 27 17:06:58.337171 containerd[1886]: time="2025-05-27T17:06:58.337145468Z" level=info msg="StartContainer for \"c80ea451e7f4b1ded2091f87730678ad410c402a1f788f4cd346df7598696d36\" returns successfully" May 27 17:06:58.355731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c80ea451e7f4b1ded2091f87730678ad410c402a1f788f4cd346df7598696d36-rootfs.mount: Deactivated successfully. May 27 17:06:58.763964 kubelet[3487]: E0527 17:06:58.763901 3487 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 27 17:06:59.218455 containerd[1886]: time="2025-05-27T17:06:59.218333638Z" level=info msg="CreateContainer within sandbox \"2c838fc17815bc35d7352e0d01891e0394c826a04256e131ab03fb973efec2cc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 17:06:59.265019 containerd[1886]: time="2025-05-27T17:06:59.264975296Z" level=info msg="Container 1d903e2ef52455d4f368adea0aac5b8b286d5be6caec16c1bbd92dbd9fdc4bbd: CDI devices from CRI Config.CDIDevices: []" May 27 17:06:59.287837 containerd[1886]: time="2025-05-27T17:06:59.287673786Z" level=info msg="CreateContainer within sandbox \"2c838fc17815bc35d7352e0d01891e0394c826a04256e131ab03fb973efec2cc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1d903e2ef52455d4f368adea0aac5b8b286d5be6caec16c1bbd92dbd9fdc4bbd\"" May 27 17:06:59.289750 containerd[1886]: time="2025-05-27T17:06:59.289643016Z" level=info msg="StartContainer for \"1d903e2ef52455d4f368adea0aac5b8b286d5be6caec16c1bbd92dbd9fdc4bbd\"" May 27 17:06:59.291529 containerd[1886]: time="2025-05-27T17:06:59.291491475Z" level=info msg="connecting to shim 1d903e2ef52455d4f368adea0aac5b8b286d5be6caec16c1bbd92dbd9fdc4bbd" address="unix:///run/containerd/s/5a8d518375380df2464d8dc050a7b748a10f8653008159c5c5d96e7e06aea1ba" protocol=ttrpc version=3 May 27 17:06:59.312536 systemd[1]: Started cri-containerd-1d903e2ef52455d4f368adea0aac5b8b286d5be6caec16c1bbd92dbd9fdc4bbd.scope - libcontainer container 1d903e2ef52455d4f368adea0aac5b8b286d5be6caec16c1bbd92dbd9fdc4bbd. May 27 17:06:59.334463 systemd[1]: cri-containerd-1d903e2ef52455d4f368adea0aac5b8b286d5be6caec16c1bbd92dbd9fdc4bbd.scope: Deactivated successfully. 
May 27 17:06:59.336907 containerd[1886]: time="2025-05-27T17:06:59.336851189Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d903e2ef52455d4f368adea0aac5b8b286d5be6caec16c1bbd92dbd9fdc4bbd\" id:\"1d903e2ef52455d4f368adea0aac5b8b286d5be6caec16c1bbd92dbd9fdc4bbd\" pid:5545 exited_at:{seconds:1748365619 nanos:336290939}" May 27 17:06:59.341952 containerd[1886]: time="2025-05-27T17:06:59.341917958Z" level=info msg="received exit event container_id:\"1d903e2ef52455d4f368adea0aac5b8b286d5be6caec16c1bbd92dbd9fdc4bbd\" id:\"1d903e2ef52455d4f368adea0aac5b8b286d5be6caec16c1bbd92dbd9fdc4bbd\" pid:5545 exited_at:{seconds:1748365619 nanos:336290939}" May 27 17:06:59.343483 containerd[1886]: time="2025-05-27T17:06:59.343447943Z" level=info msg="StartContainer for \"1d903e2ef52455d4f368adea0aac5b8b286d5be6caec16c1bbd92dbd9fdc4bbd\" returns successfully" May 27 17:06:59.358956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d903e2ef52455d4f368adea0aac5b8b286d5be6caec16c1bbd92dbd9fdc4bbd-rootfs.mount: Deactivated successfully. May 27 17:07:00.225042 containerd[1886]: time="2025-05-27T17:07:00.224962354Z" level=info msg="CreateContainer within sandbox \"2c838fc17815bc35d7352e0d01891e0394c826a04256e131ab03fb973efec2cc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 17:07:00.266929 containerd[1886]: time="2025-05-27T17:07:00.266489981Z" level=info msg="Container 375a885b585fb0a8040703a993e0f88b7f856d246bee4f6a7f16fba6bde1be16: CDI devices from CRI Config.CDIDevices: []" May 27 17:07:00.291747 containerd[1886]: time="2025-05-27T17:07:00.291700329Z" level=info msg="CreateContainer within sandbox \"2c838fc17815bc35d7352e0d01891e0394c826a04256e131ab03fb973efec2cc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"375a885b585fb0a8040703a993e0f88b7f856d246bee4f6a7f16fba6bde1be16\"" May 27 17:07:00.292759 containerd[1886]: time="2025-05-27T17:07:00.292732267Z" level=info msg="StartContainer for \"375a885b585fb0a8040703a993e0f88b7f856d246bee4f6a7f16fba6bde1be16\"" May 27 17:07:00.294837 containerd[1886]: time="2025-05-27T17:07:00.294800255Z" level=info msg="connecting to shim 375a885b585fb0a8040703a993e0f88b7f856d246bee4f6a7f16fba6bde1be16" address="unix:///run/containerd/s/5a8d518375380df2464d8dc050a7b748a10f8653008159c5c5d96e7e06aea1ba" protocol=ttrpc version=3 May 27 17:07:00.318533 systemd[1]: Started cri-containerd-375a885b585fb0a8040703a993e0f88b7f856d246bee4f6a7f16fba6bde1be16.scope - libcontainer container 375a885b585fb0a8040703a993e0f88b7f856d246bee4f6a7f16fba6bde1be16. 
May 27 17:07:00.350020 containerd[1886]: time="2025-05-27T17:07:00.349972755Z" level=info msg="StartContainer for \"375a885b585fb0a8040703a993e0f88b7f856d246bee4f6a7f16fba6bde1be16\" returns successfully" May 27 17:07:00.412988 containerd[1886]: time="2025-05-27T17:07:00.412945286Z" level=info msg="TaskExit event in podsandbox handler container_id:\"375a885b585fb0a8040703a993e0f88b7f856d246bee4f6a7f16fba6bde1be16\" id:\"57914d5bf5925782b379b6d90ac40eda540fb8735207e07fd2536145f704a9ca\" pid:5613 exited_at:{seconds:1748365620 nanos:412659813}" May 27 17:07:00.756407 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 27 17:07:01.718558 containerd[1886]: time="2025-05-27T17:07:01.718507796Z" level=info msg="TaskExit event in podsandbox handler container_id:\"375a885b585fb0a8040703a993e0f88b7f856d246bee4f6a7f16fba6bde1be16\" id:\"c05aac0f0d08048aa64a35e1b532c6d2513bc220e502c2f62db731ec65f6ab25\" pid:5688 exit_status:1 exited_at:{seconds:1748365621 nanos:717916392}" May 27 17:07:03.287106 systemd-networkd[1668]: lxc_health: Link UP May 27 17:07:03.295032 systemd-networkd[1668]: lxc_health: Gained carrier May 27 17:07:03.664166 containerd[1886]: time="2025-05-27T17:07:03.663942404Z" level=info msg="StopPodSandbox for \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\"" May 27 17:07:03.664166 containerd[1886]: time="2025-05-27T17:07:03.664079608Z" level=info msg="TearDown network for sandbox \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" successfully" May 27 17:07:03.664166 containerd[1886]: time="2025-05-27T17:07:03.664089545Z" level=info msg="StopPodSandbox for \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" returns successfully" May 27 17:07:03.664981 containerd[1886]: time="2025-05-27T17:07:03.664951933Z" level=info msg="RemovePodSandbox for \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\"" May 27 17:07:03.664981 containerd[1886]: time="2025-05-27T17:07:03.664981574Z" level=info msg="Forcibly stopping sandbox \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\"" May 27 17:07:03.665076 containerd[1886]: time="2025-05-27T17:07:03.665069425Z" level=info msg="TearDown network for sandbox \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" successfully" May 27 17:07:03.666023 containerd[1886]: time="2025-05-27T17:07:03.665989839Z" level=info msg="Ensure that sandbox fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138 in task-service has been cleanup successfully" May 27 17:07:03.699829 containerd[1886]: time="2025-05-27T17:07:03.699776053Z" level=info msg="RemovePodSandbox \"fa6d771acb0a3058ca2ae50031d3dd0feb360441b4eb8bf283f5a56823b50138\" returns successfully" May 27 17:07:03.700454 containerd[1886]: time="2025-05-27T17:07:03.700422642Z" level=info msg="StopPodSandbox for \"bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f\"" May 27 17:07:03.700608 containerd[1886]: time="2025-05-27T17:07:03.700544910Z" level=info msg="TearDown network for sandbox \"bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f\" successfully" May 27 17:07:03.700608 containerd[1886]: time="2025-05-27T17:07:03.700556446Z" level=info msg="StopPodSandbox for \"bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f\" returns successfully" May 27 17:07:03.701423 containerd[1886]: time="2025-05-27T17:07:03.701065759Z" level=info msg="RemovePodSandbox for \"bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f\"" May 27 
17:07:03.701562 containerd[1886]: time="2025-05-27T17:07:03.701544214Z" level=info msg="Forcibly stopping sandbox \"bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f\"" May 27 17:07:03.701706 containerd[1886]: time="2025-05-27T17:07:03.701689267Z" level=info msg="TearDown network for sandbox \"bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f\" successfully" May 27 17:07:03.703383 containerd[1886]: time="2025-05-27T17:07:03.703301968Z" level=info msg="Ensure that sandbox bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f in task-service has been cleanup successfully" May 27 17:07:03.724217 containerd[1886]: time="2025-05-27T17:07:03.724170606Z" level=info msg="RemovePodSandbox \"bd2cebf1782af97463c3cdedfd508abb50d53a7d6478eaf27715819e521c0e6f\" returns successfully" May 27 17:07:03.863815 containerd[1886]: time="2025-05-27T17:07:03.863769765Z" level=info msg="TaskExit event in podsandbox handler container_id:\"375a885b585fb0a8040703a993e0f88b7f856d246bee4f6a7f16fba6bde1be16\" id:\"b214b25878a99d5f52d1f6c635e2a179a0ebf7c640404a422fc9dfaf27cac5ca\" pid:6134 exited_at:{seconds:1748365623 nanos:861886992}" May 27 17:07:04.219117 kubelet[3487]: I0527 17:07:04.218902 3487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pt5bn" podStartSLOduration=9.218886602 podStartE2EDuration="9.218886602s" podCreationTimestamp="2025-05-27 17:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:07:01.237813652 +0000 UTC m=+237.732837504" watchObservedRunningTime="2025-05-27 17:07:04.218886602 +0000 UTC m=+240.713910454" May 27 17:07:04.539573 systemd-networkd[1668]: lxc_health: Gained IPv6LL May 27 17:07:05.963606 containerd[1886]: time="2025-05-27T17:07:05.963556090Z" level=info msg="TaskExit event in podsandbox handler container_id:\"375a885b585fb0a8040703a993e0f88b7f856d246bee4f6a7f16fba6bde1be16\" id:\"2859d4ea072dc9cb5321a824622490f86b855ba62f0c81834afff9e86a301070\" pid:6170 exited_at:{seconds:1748365625 nanos:962734319}" May 27 17:07:08.060923 containerd[1886]: time="2025-05-27T17:07:08.060823181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"375a885b585fb0a8040703a993e0f88b7f856d246bee4f6a7f16fba6bde1be16\" id:\"907df61e00978c92abdb5b4ae7f9e486082ace138e642ca75912b87a51ef49e1\" pid:6199 exited_at:{seconds:1748365628 nanos:60507283}" May 27 17:07:10.143610 containerd[1886]: time="2025-05-27T17:07:10.143567789Z" level=info msg="TaskExit event in podsandbox handler container_id:\"375a885b585fb0a8040703a993e0f88b7f856d246bee4f6a7f16fba6bde1be16\" id:\"bc243850e48af8c289e1377b1e9161d08221c2eb0edec6f66a02531a6f3322d1\" pid:6221 exited_at:{seconds:1748365630 nanos:143255531}" May 27 17:07:10.240724 sshd[5473]: Connection closed by 10.200.16.10 port 37874 May 27 17:07:10.239880 sshd-session[5441]: pam_unix(sshd:session): session closed for user core May 27 17:07:10.243427 systemd-logind[1856]: Session 27 logged out. Waiting for processes to exit. May 27 17:07:10.245070 systemd[1]: sshd@24-10.200.20.45:22-10.200.16.10:37874.service: Deactivated successfully. May 27 17:07:10.248268 systemd[1]: session-27.scope: Deactivated successfully. May 27 17:07:10.250579 systemd-logind[1856]: Removed session 27.