May 13 23:57:30.054254 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:08:35 -00 2025 May 13 23:57:30.054281 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 13 23:57:30.054293 kernel: BIOS-provided physical RAM map: May 13 23:57:30.054302 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 13 23:57:30.054308 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved May 13 23:57:30.054315 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable May 13 23:57:30.054325 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved May 13 23:57:30.054331 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data May 13 23:57:30.054341 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS May 13 23:57:30.054349 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable May 13 23:57:30.054355 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable May 13 23:57:30.054364 kernel: printk: bootconsole [earlyser0] enabled May 13 23:57:30.054371 kernel: NX (Execute Disable) protection: active May 13 23:57:30.054378 kernel: APIC: Static calls initialized May 13 23:57:30.054389 kernel: efi: EFI v2.7 by Microsoft May 13 23:57:30.054398 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018 May 13 23:57:30.054405 kernel: random: crng init done May 13 23:57:30.054415 kernel: 
secureboot: Secure boot disabled May 13 23:57:30.054421 kernel: SMBIOS 3.1.0 present. May 13 23:57:30.054432 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 May 13 23:57:30.054439 kernel: Hypervisor detected: Microsoft Hyper-V May 13 23:57:30.054446 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 May 13 23:57:30.054454 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0 May 13 23:57:30.054463 kernel: Hyper-V: Nested features: 0x1e0101 May 13 23:57:30.054470 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 May 13 23:57:30.054482 kernel: Hyper-V: Using hypercall for remote TLB flush May 13 23:57:30.054489 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns May 13 23:57:30.054498 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns May 13 23:57:30.054507 kernel: tsc: Marking TSC unstable due to running on Hyper-V May 13 23:57:30.054516 kernel: tsc: Detected 2593.905 MHz processor May 13 23:57:30.054524 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 23:57:30.054534 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 23:57:30.054541 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 May 13 23:57:30.054551 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 13 23:57:30.054561 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 23:57:30.054571 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved May 13 23:57:30.054578 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 May 13 23:57:30.054587 kernel: Using GB pages for direct mapping May 13 23:57:30.054595 kernel: ACPI: Early table checksum verification disabled May 13 23:57:30.054602 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) May 13 
23:57:30.054616 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:57:30.054628 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:57:30.054639 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) May 13 23:57:30.054649 kernel: ACPI: FACS 0x000000003FFFE000 000040 May 13 23:57:30.054677 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:57:30.054686 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:57:30.054693 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:57:30.054701 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:57:30.054712 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:57:30.054724 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:57:30.054737 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:57:30.054746 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] May 13 23:57:30.054754 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] May 13 23:57:30.054768 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] May 13 23:57:30.054784 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] May 13 23:57:30.054798 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] May 13 23:57:30.054808 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] May 13 23:57:30.054821 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] May 13 23:57:30.054836 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] May 13 23:57:30.054849 kernel: ACPI: 
Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] May 13 23:57:30.054857 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] May 13 23:57:30.054867 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 13 23:57:30.054884 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 13 23:57:30.054899 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug May 13 23:57:30.054913 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug May 13 23:57:30.054935 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug May 13 23:57:30.054949 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug May 13 23:57:30.054964 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug May 13 23:57:30.054978 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug May 13 23:57:30.054987 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug May 13 23:57:30.054997 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug May 13 23:57:30.055012 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug May 13 23:57:30.055022 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug May 13 23:57:30.055029 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug May 13 23:57:30.055051 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug May 13 23:57:30.055065 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug May 13 23:57:30.055072 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug May 13 23:57:30.055083 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug May 13 23:57:30.055104 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug May 13 23:57:30.055120 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] 
May 13 23:57:30.055128 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] May 13 23:57:30.055136 kernel: Zone ranges: May 13 23:57:30.055151 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 23:57:30.055175 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 13 23:57:30.055188 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] May 13 23:57:30.055196 kernel: Movable zone start for each node May 13 23:57:30.055205 kernel: Early memory node ranges May 13 23:57:30.055219 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 13 23:57:30.055233 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] May 13 23:57:30.055244 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] May 13 23:57:30.055252 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] May 13 23:57:30.055265 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] May 13 23:57:30.055286 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 23:57:30.055295 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 13 23:57:30.055303 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges May 13 23:57:30.055316 kernel: ACPI: PM-Timer IO Port: 0x408 May 13 23:57:30.055332 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) May 13 23:57:30.055345 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 May 13 23:57:30.055353 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 13 23:57:30.055361 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 23:57:30.055376 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 May 13 23:57:30.055395 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 13 23:57:30.055403 kernel: [mem 0x40000000-0xffffffff] available for PCI devices May 13 23:57:30.055412 kernel: Booting paravirtualized kernel on Hyper-V May 13 23:57:30.055429 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, 
max_idle_ns: 1910969940391419 ns May 13 23:57:30.055440 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 13 23:57:30.055448 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 13 23:57:30.055459 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 13 23:57:30.055475 kernel: pcpu-alloc: [0] 0 1 May 13 23:57:30.055484 kernel: Hyper-V: PV spinlocks enabled May 13 23:57:30.055497 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 13 23:57:30.055516 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 13 23:57:30.055526 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 23:57:30.055533 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) May 13 23:57:30.055547 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 23:57:30.055562 kernel: Fallback order for Node 0: 0 May 13 23:57:30.055573 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 May 13 23:57:30.055581 kernel: Policy zone: Normal May 13 23:57:30.055614 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 23:57:30.055624 kernel: software IO TLB: area num 2. 
May 13 23:57:30.055633 kernel: Memory: 8072992K/8387460K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 314212K reserved, 0K cma-reserved) May 13 23:57:30.055652 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 13 23:57:30.055664 kernel: ftrace: allocating 37993 entries in 149 pages May 13 23:57:30.055672 kernel: ftrace: allocated 149 pages with 4 groups May 13 23:57:30.055685 kernel: Dynamic Preempt: voluntary May 13 23:57:30.055702 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 23:57:30.055718 kernel: rcu: RCU event tracing is enabled. May 13 23:57:30.055726 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 13 23:57:30.055741 kernel: Trampoline variant of Tasks RCU enabled. May 13 23:57:30.055758 kernel: Rude variant of Tasks RCU enabled. May 13 23:57:30.055774 kernel: Tracing variant of Tasks RCU enabled. May 13 23:57:30.055783 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 23:57:30.055794 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 13 23:57:30.055811 kernel: Using NULL legacy PIC May 13 23:57:30.055825 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 May 13 23:57:30.055833 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
May 13 23:57:30.055849 kernel: Console: colour dummy device 80x25 May 13 23:57:30.055861 kernel: printk: console [tty1] enabled May 13 23:57:30.055869 kernel: printk: console [ttyS0] enabled May 13 23:57:30.055881 kernel: printk: bootconsole [earlyser0] disabled May 13 23:57:30.055896 kernel: ACPI: Core revision 20230628 May 13 23:57:30.055911 kernel: Failed to register legacy timer interrupt May 13 23:57:30.055928 kernel: APIC: Switch to symmetric I/O mode setup May 13 23:57:30.055944 kernel: Hyper-V: enabling crash_kexec_post_notifiers May 13 23:57:30.055961 kernel: Hyper-V: Using IPI hypercalls May 13 23:57:30.055974 kernel: APIC: send_IPI() replaced with hv_send_ipi() May 13 23:57:30.055987 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() May 13 23:57:30.056001 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() May 13 23:57:30.056015 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() May 13 23:57:30.056030 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() May 13 23:57:30.056044 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() May 13 23:57:30.056059 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) May 13 23:57:30.056076 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 13 23:57:30.056091 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 13 23:57:30.056105 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 23:57:30.056118 kernel: Spectre V2 : Mitigation: Retpolines May 13 23:57:30.056133 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 13 23:57:30.056147 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
May 13 23:57:30.056161 kernel: RETBleed: Vulnerable May 13 23:57:30.056175 kernel: Speculative Store Bypass: Vulnerable May 13 23:57:30.056189 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode May 13 23:57:30.056203 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 13 23:57:30.056216 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 13 23:57:30.056233 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 13 23:57:30.056247 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 13 23:57:30.056261 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' May 13 23:57:30.056276 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' May 13 23:57:30.056290 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' May 13 23:57:30.056303 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 13 23:57:30.056317 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 May 13 23:57:30.056331 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 May 13 23:57:30.056345 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 May 13 23:57:30.056359 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. May 13 23:57:30.056373 kernel: Freeing SMP alternatives memory: 32K May 13 23:57:30.056390 kernel: pid_max: default: 32768 minimum: 301 May 13 23:57:30.056404 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 23:57:30.056418 kernel: landlock: Up and running. May 13 23:57:30.056432 kernel: SELinux: Initializing. 
May 13 23:57:30.056447 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) May 13 23:57:30.056461 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) May 13 23:57:30.056476 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) May 13 23:57:30.056490 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 23:57:30.056505 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 23:57:30.056520 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 23:57:30.056537 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. May 13 23:57:30.056552 kernel: signal: max sigframe size: 3632 May 13 23:57:30.056566 kernel: rcu: Hierarchical SRCU implementation. May 13 23:57:30.056581 kernel: rcu: Max phase no-delay instances is 400. May 13 23:57:30.056595 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 13 23:57:30.056609 kernel: smp: Bringing up secondary CPUs ... May 13 23:57:30.056624 kernel: smpboot: x86: Booting SMP configuration: May 13 23:57:30.056638 kernel: .... node #0, CPUs: #1 May 13 23:57:30.056653 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. May 13 23:57:30.056670 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
May 13 23:57:30.056685 kernel: smp: Brought up 1 node, 2 CPUs May 13 23:57:30.056699 kernel: smpboot: Max logical packages: 1 May 13 23:57:30.056714 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) May 13 23:57:30.056728 kernel: devtmpfs: initialized May 13 23:57:30.056742 kernel: x86/mm: Memory block size: 128MB May 13 23:57:30.056757 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) May 13 23:57:30.056771 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 23:57:30.056786 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 13 23:57:30.056803 kernel: pinctrl core: initialized pinctrl subsystem May 13 23:57:30.056818 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 23:57:30.056831 kernel: audit: initializing netlink subsys (disabled) May 13 23:57:30.056846 kernel: audit: type=2000 audit(1747180648.027:1): state=initialized audit_enabled=0 res=1 May 13 23:57:30.056860 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 23:57:30.056874 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 23:57:30.056889 kernel: cpuidle: using governor menu May 13 23:57:30.056903 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 23:57:30.056918 kernel: dca service started, version 1.12.1 May 13 23:57:30.056949 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] May 13 23:57:30.056966 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 13 23:57:30.056986 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 23:57:30.057006 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 13 23:57:30.057021 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 23:57:30.057036 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 13 23:57:30.057050 kernel: ACPI: Added _OSI(Module Device) May 13 23:57:30.057064 kernel: ACPI: Added _OSI(Processor Device) May 13 23:57:30.057082 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 23:57:30.057096 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 23:57:30.057111 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 23:57:30.057125 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 13 23:57:30.057139 kernel: ACPI: Interpreter enabled May 13 23:57:30.057154 kernel: ACPI: PM: (supports S0 S5) May 13 23:57:30.057168 kernel: ACPI: Using IOAPIC for interrupt routing May 13 23:57:30.057182 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 23:57:30.057197 kernel: PCI: Ignoring E820 reservations for host bridge windows May 13 23:57:30.057214 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F May 13 23:57:30.057228 kernel: iommu: Default domain type: Translated May 13 23:57:30.057243 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 23:57:30.057257 kernel: efivars: Registered efivars operations May 13 23:57:30.057272 kernel: PCI: Using ACPI for IRQ routing May 13 23:57:30.057286 kernel: PCI: System does not support PCI May 13 23:57:30.057300 kernel: vgaarb: loaded May 13 23:57:30.057314 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page May 13 23:57:30.057328 kernel: VFS: Disk quotas dquot_6.6.0 May 13 23:57:30.057343 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 23:57:30.057359 kernel: pnp: PnP ACPI init May 13 23:57:30.057374 
kernel: pnp: PnP ACPI: found 3 devices May 13 23:57:30.057389 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 23:57:30.057404 kernel: NET: Registered PF_INET protocol family May 13 23:57:30.057418 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 13 23:57:30.057433 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) May 13 23:57:30.057447 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 23:57:30.057465 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 23:57:30.057483 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 13 23:57:30.057498 kernel: TCP: Hash tables configured (established 65536 bind 65536) May 13 23:57:30.057512 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) May 13 23:57:30.057527 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) May 13 23:57:30.057541 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 23:57:30.057555 kernel: NET: Registered PF_XDP protocol family May 13 23:57:30.057569 kernel: PCI: CLS 0 bytes, default 64 May 13 23:57:30.057583 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 13 23:57:30.057597 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) May 13 23:57:30.057615 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 13 23:57:30.057629 kernel: Initialise system trusted keyrings May 13 23:57:30.057644 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 May 13 23:57:30.057658 kernel: Key type asymmetric registered May 13 23:57:30.057672 kernel: Asymmetric key parser 'x509' registered May 13 23:57:30.057686 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 13 23:57:30.057700 kernel: io scheduler mq-deadline 
registered May 13 23:57:30.057714 kernel: io scheduler kyber registered May 13 23:57:30.057729 kernel: io scheduler bfq registered May 13 23:57:30.057743 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 23:57:30.057760 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 23:57:30.057774 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 23:57:30.057789 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A May 13 23:57:30.057803 kernel: i8042: PNP: No PS/2 controller found. May 13 23:57:30.058021 kernel: rtc_cmos 00:02: registered as rtc0 May 13 23:57:30.058146 kernel: rtc_cmos 00:02: setting system clock to 2025-05-13T23:57:29 UTC (1747180649) May 13 23:57:30.058264 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram May 13 23:57:30.058285 kernel: intel_pstate: CPU model not supported May 13 23:57:30.058300 kernel: efifb: probing for efifb May 13 23:57:30.058315 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k May 13 23:57:30.058329 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 May 13 23:57:30.058344 kernel: efifb: scrolling: redraw May 13 23:57:30.058358 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 13 23:57:30.058372 kernel: Console: switching to colour frame buffer device 128x48 May 13 23:57:30.058388 kernel: fb0: EFI VGA frame buffer device May 13 23:57:30.058402 kernel: pstore: Using crash dump compression: deflate May 13 23:57:30.058419 kernel: pstore: Registered efi_pstore as persistent store backend May 13 23:57:30.058433 kernel: NET: Registered PF_INET6 protocol family May 13 23:57:30.058448 kernel: Segment Routing with IPv6 May 13 23:57:30.058462 kernel: In-situ OAM (IOAM) with IPv6 May 13 23:57:30.058477 kernel: NET: Registered PF_PACKET protocol family May 13 23:57:30.058491 kernel: Key type dns_resolver registered May 13 23:57:30.058505 kernel: IPI shorthand broadcast: enabled May 13 23:57:30.058519 kernel: 
sched_clock: Marking stable (752002800, 36534200)->(960422900, -171885900) May 13 23:57:30.058534 kernel: registered taskstats version 1 May 13 23:57:30.058551 kernel: Loading compiled-in X.509 certificates May 13 23:57:30.058565 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 166efda032ca4d6e9037c569aca9b53585ee6f94' May 13 23:57:30.058579 kernel: Key type .fscrypt registered May 13 23:57:30.058593 kernel: Key type fscrypt-provisioning registered May 13 23:57:30.058608 kernel: ima: No TPM chip found, activating TPM-bypass! May 13 23:57:30.058622 kernel: ima: Allocated hash algorithm: sha1 May 13 23:57:30.058636 kernel: ima: No architecture policies found May 13 23:57:30.058650 kernel: clk: Disabling unused clocks May 13 23:57:30.058665 kernel: Freeing unused kernel image (initmem) memory: 43604K May 13 23:57:30.058683 kernel: Write protecting the kernel read-only data: 40960k May 13 23:57:30.058697 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K May 13 23:57:30.058711 kernel: Run /init as init process May 13 23:57:30.058726 kernel: with arguments: May 13 23:57:30.058739 kernel: /init May 13 23:57:30.058753 kernel: with environment: May 13 23:57:30.058767 kernel: HOME=/ May 13 23:57:30.058781 kernel: TERM=linux May 13 23:57:30.058795 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 23:57:30.058813 systemd[1]: Successfully made /usr/ read-only. May 13 23:57:30.058833 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:57:30.058848 systemd[1]: Detected virtualization microsoft. May 13 23:57:30.058863 systemd[1]: Detected architecture x86-64. May 13 23:57:30.058877 systemd[1]: Running in initrd. 
May 13 23:57:30.058892 systemd[1]: No hostname configured, using default hostname. May 13 23:57:30.058908 systemd[1]: Hostname set to . May 13 23:57:30.058935 systemd[1]: Initializing machine ID from random generator. May 13 23:57:30.058948 systemd[1]: Queued start job for default target initrd.target. May 13 23:57:30.058962 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:57:30.058977 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:57:30.058992 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 23:57:30.059006 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:57:30.059020 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 23:57:30.059040 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 23:57:30.059057 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 23:57:30.059072 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 23:57:30.059086 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:57:30.059101 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:57:30.059116 systemd[1]: Reached target paths.target - Path Units. May 13 23:57:30.059130 systemd[1]: Reached target slices.target - Slice Units. May 13 23:57:30.059145 systemd[1]: Reached target swap.target - Swaps. May 13 23:57:30.059162 systemd[1]: Reached target timers.target - Timer Units. May 13 23:57:30.059177 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
May 13 23:57:30.059191 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:57:30.059207 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 23:57:30.059221 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 23:57:30.059235 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:57:30.059250 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:57:30.059264 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:57:30.059277 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:57:30.059296 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 23:57:30.059311 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:57:30.059326 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 23:57:30.059341 systemd[1]: Starting systemd-fsck-usr.service... May 13 23:57:30.059354 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:57:30.059368 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:57:30.059382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:57:30.059421 systemd-journald[177]: Collecting audit messages is disabled. May 13 23:57:30.059457 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 23:57:30.059471 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:57:30.059490 systemd-journald[177]: Journal started May 13 23:57:30.059523 systemd-journald[177]: Runtime Journal (/run/log/journal/c981a66e85334b8989262de719437b64) is 8M, max 158.7M, 150.7M free. May 13 23:57:30.049353 systemd-modules-load[178]: Inserted module 'overlay' May 13 23:57:30.083462 systemd[1]: Started systemd-journald.service - Journal Service. 
May 13 23:57:30.069938 systemd[1]: Finished systemd-fsck-usr.service. May 13 23:57:30.073107 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:57:30.078046 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:57:30.094331 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:57:30.107099 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:57:30.113656 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 23:57:30.117739 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:57:30.122971 kernel: Bridge firewalling registered May 13 23:57:30.117972 systemd-modules-load[178]: Inserted module 'br_netfilter' May 13 23:57:30.120697 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:57:30.129039 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:57:30.134799 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:57:30.140196 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:57:30.161861 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:57:30.164477 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:57:30.167028 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:57:30.181831 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:57:30.189043 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
May 13 23:57:30.209675 dracut-cmdline[214]: dracut-dracut-053
May 13 23:57:30.213820 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:57:30.223282 systemd-resolved[208]: Positive Trust Anchors:
May 13 23:57:30.223291 systemd-resolved[208]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:57:30.223330 systemd-resolved[208]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:57:30.227106 systemd-resolved[208]: Defaulting to hostname 'linux'.
May 13 23:57:30.229174 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:57:30.233241 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:57:30.302950 kernel: SCSI subsystem initialized
May 13 23:57:30.312943 kernel: Loading iSCSI transport class v2.0-870.
May 13 23:57:30.323944 kernel: iscsi: registered transport (tcp)
May 13 23:57:30.344149 kernel: iscsi: registered transport (qla4xxx)
May 13 23:57:30.344236 kernel: QLogic iSCSI HBA Driver
May 13 23:57:30.379265 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 23:57:30.383738 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 23:57:30.415447 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 23:57:30.415544 kernel: device-mapper: uevent: version 1.0.3
May 13 23:57:30.420740 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 23:57:30.460968 kernel: raid6: avx512x4 gen() 18177 MB/s
May 13 23:57:30.479936 kernel: raid6: avx512x2 gen() 18183 MB/s
May 13 23:57:30.497939 kernel: raid6: avx512x1 gen() 18033 MB/s
May 13 23:57:30.515937 kernel: raid6: avx2x4 gen() 18071 MB/s
May 13 23:57:30.534937 kernel: raid6: avx2x2 gen() 17966 MB/s
May 13 23:57:30.554307 kernel: raid6: avx2x1 gen() 13665 MB/s
May 13 23:57:30.554373 kernel: raid6: using algorithm avx512x2 gen() 18183 MB/s
May 13 23:57:30.575193 kernel: raid6: .... xor() 25442 MB/s, rmw enabled
May 13 23:57:30.575279 kernel: raid6: using avx512x2 recovery algorithm
May 13 23:57:30.596951 kernel: xor: automatically using best checksumming function avx
May 13 23:57:30.737949 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 23:57:30.747656 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:57:30.752062 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:57:30.780169 systemd-udevd[396]: Using default interface naming scheme 'v255'.
May 13 23:57:30.785304 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:57:30.793191 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 23:57:30.816733 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
May 13 23:57:30.843185 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:57:30.849385 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:57:30.899029 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:57:30.910502 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 23:57:30.941057 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 23:57:30.950918 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:57:30.955186 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:57:30.955862 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:57:30.967314 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 23:57:30.991901 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:57:31.000945 kernel: cryptd: max_cpu_qlen set to 1000
May 13 23:57:31.022577 kernel: AVX2 version of gcm_enc/dec engaged.
May 13 23:57:31.025793 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:57:31.028362 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:57:31.033848 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:57:31.043065 kernel: AES CTR mode by8 optimization enabled
May 13 23:57:31.036226 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:57:31.036580 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:57:31.061254 kernel: hv_vmbus: Vmbus version:5.2
May 13 23:57:31.043012 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:57:31.051280 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:57:31.066269 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:57:31.075271 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:57:31.076522 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:57:31.088015 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 23:57:31.093314 kernel: hv_vmbus: registering driver hyperv_keyboard
May 13 23:57:31.090571 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:57:31.107551 kernel: hv_vmbus: registering driver hid_hyperv
May 13 23:57:31.107587 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
May 13 23:57:31.117943 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 13 23:57:31.118230 kernel: hv_vmbus: registering driver hv_netvsc
May 13 23:57:31.138552 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
May 13 23:57:31.140966 kernel: pps_core: LinuxPPS API ver. 1 registered
May 13 23:57:31.141006 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 13 23:57:31.150274 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:57:31.164559 kernel: PTP clock support registered
May 13 23:57:31.166653 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:57:31.169905 kernel: hv_vmbus: registering driver hv_storvsc
May 13 23:57:31.177794 kernel: scsi host0: storvsc_host_t
May 13 23:57:31.177841 kernel: hv_utils: Registering HyperV Utility Driver
May 13 23:57:31.179617 kernel: hv_vmbus: registering driver hv_utils
May 13 23:57:31.179661 kernel: scsi host1: storvsc_host_t
May 13 23:57:31.179871 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
May 13 23:57:31.181935 kernel: hv_utils: Heartbeat IC version 3.0
May 13 23:57:31.181991 kernel: hv_utils: Shutdown IC version 3.2
May 13 23:57:31.744765 kernel: hv_utils: TimeSync IC version 4.0
May 13 23:57:31.744802 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
May 13 23:57:31.744729 systemd-resolved[208]: Clock change detected. Flushing caches.
May 13 23:57:31.775368 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 13 23:57:31.775678 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 13 23:57:31.778521 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 13 23:57:31.783143 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:57:31.792814 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
May 13 23:57:31.793143 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
May 13 23:57:31.795522 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 13 23:57:31.795715 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
May 13 23:57:31.795841 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
May 13 23:57:31.804601 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 13 23:57:31.804635 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 13 23:57:31.841019 kernel: hv_netvsc 7ced8d41-f078-7ced-8d41-f0787ced8d41 eth0: VF slot 1 added
May 13 23:57:31.841259 kernel: hv_vmbus: registering driver hv_pci
May 13 23:57:31.841280 kernel: hv_pci d08c44bf-8f17-49d4-b5c2-0b671112c469: PCI VMBus probing: Using version 0x10004
May 13 23:57:31.848642 kernel: hv_pci d08c44bf-8f17-49d4-b5c2-0b671112c469: PCI host bridge to bus 8f17:00
May 13 23:57:31.848904 kernel: pci_bus 8f17:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
May 13 23:57:31.851453 kernel: pci_bus 8f17:00: No busn resource found for root bus, will use [bus 00-ff]
May 13 23:57:31.855559 kernel: pci 8f17:00:02.0: [15b3:1016] type 00 class 0x020000
May 13 23:57:31.862704 kernel: pci 8f17:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
May 13 23:57:31.862748 kernel: pci 8f17:00:02.0: enabling Extended Tags
May 13 23:57:31.876396 kernel: pci 8f17:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8f17:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 13 23:57:31.876652 kernel: pci_bus 8f17:00: busn_res: [bus 00-ff] end is updated to 00
May 13 23:57:31.879898 kernel: pci 8f17:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
May 13 23:57:32.042380 kernel: mlx5_core 8f17:00:02.0: enabling device (0000 -> 0002)
May 13 23:57:32.046522 kernel: mlx5_core 8f17:00:02.0: firmware version: 14.30.5000
May 13 23:57:32.270001 kernel: hv_netvsc 7ced8d41-f078-7ced-8d41-f0787ced8d41 eth0: VF registering: eth1
May 13 23:57:32.270364 kernel: mlx5_core 8f17:00:02.0 eth1: joined to eth0
May 13 23:57:32.273533 kernel: mlx5_core 8f17:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 13 23:57:32.282535 kernel: mlx5_core 8f17:00:02.0 enP36631s1: renamed from eth1
May 13 23:57:32.431241 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
May 13 23:57:32.440527 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (456)
May 13 23:57:32.473099 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 13 23:57:32.494861 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
May 13 23:57:32.513525 kernel: BTRFS: device fsid d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (444)
May 13 23:57:32.531615 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
May 13 23:57:32.537531 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
May 13 23:57:32.543216 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 23:57:32.564385 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 13 23:57:32.571554 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 13 23:57:33.576626 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 13 23:57:33.580241 disk-uuid[603]: The operation has completed successfully.
May 13 23:57:33.685235 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 23:57:33.685360 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 23:57:33.700039 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 23:57:33.715913 sh[689]: Success
May 13 23:57:33.746628 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 13 23:57:33.963989 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 23:57:33.971618 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 23:57:33.981016 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 23:57:33.993524 kernel: BTRFS info (device dm-0): first mount of filesystem d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8
May 13 23:57:33.993569 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 23:57:33.998345 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 23:57:34.000828 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 23:57:34.003032 kernel: BTRFS info (device dm-0): using free space tree
May 13 23:57:34.252778 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 23:57:34.256851 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 23:57:34.261541 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 23:57:34.273411 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 23:57:34.301859 kernel: BTRFS info (device sda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:57:34.301930 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 13 23:57:34.301953 kernel: BTRFS info (device sda6): using free space tree
May 13 23:57:34.320525 kernel: BTRFS info (device sda6): auto enabling async discard
May 13 23:57:34.327558 kernel: BTRFS info (device sda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:57:34.330570 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 23:57:34.335278 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 23:57:34.369405 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:57:34.373633 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:57:34.402401 systemd-networkd[870]: lo: Link UP
May 13 23:57:34.402412 systemd-networkd[870]: lo: Gained carrier
May 13 23:57:34.404772 systemd-networkd[870]: Enumeration completed
May 13 23:57:34.404994 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:57:34.407465 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:57:34.407473 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:57:34.411423 systemd[1]: Reached target network.target - Network.
May 13 23:57:34.465525 kernel: mlx5_core 8f17:00:02.0 enP36631s1: Link up
May 13 23:57:34.495988 kernel: hv_netvsc 7ced8d41-f078-7ced-8d41-f0787ced8d41 eth0: Data path switched to VF: enP36631s1
May 13 23:57:34.495577 systemd-networkd[870]: enP36631s1: Link UP
May 13 23:57:34.495697 systemd-networkd[870]: eth0: Link UP
May 13 23:57:34.495874 systemd-networkd[870]: eth0: Gained carrier
May 13 23:57:34.495887 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:57:34.502171 systemd-networkd[870]: enP36631s1: Gained carrier
May 13 23:57:34.527564 systemd-networkd[870]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16
May 13 23:57:35.132963 ignition[823]: Ignition 2.20.0
May 13 23:57:35.132978 ignition[823]: Stage: fetch-offline
May 13 23:57:35.133016 ignition[823]: no configs at "/usr/lib/ignition/base.d"
May 13 23:57:35.133027 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:57:35.133142 ignition[823]: parsed url from cmdline: ""
May 13 23:57:35.133146 ignition[823]: no config URL provided
May 13 23:57:35.133154 ignition[823]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:57:35.133164 ignition[823]: no config at "/usr/lib/ignition/user.ign"
May 13 23:57:35.133171 ignition[823]: failed to fetch config: resource requires networking
May 13 23:57:35.134489 ignition[823]: Ignition finished successfully
May 13 23:57:35.149218 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:57:35.154648 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 13 23:57:35.172587 ignition[879]: Ignition 2.20.0
May 13 23:57:35.172597 ignition[879]: Stage: fetch
May 13 23:57:35.172805 ignition[879]: no configs at "/usr/lib/ignition/base.d"
May 13 23:57:35.172819 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:57:35.172913 ignition[879]: parsed url from cmdline: ""
May 13 23:57:35.172916 ignition[879]: no config URL provided
May 13 23:57:35.172921 ignition[879]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:57:35.172929 ignition[879]: no config at "/usr/lib/ignition/user.ign"
May 13 23:57:35.172955 ignition[879]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 13 23:57:35.274025 ignition[879]: GET result: OK
May 13 23:57:35.274146 ignition[879]: config has been read from IMDS userdata
May 13 23:57:35.274227 ignition[879]: parsing config with SHA512: 19489bd21c2391fa50910c83797dd4d7e4c8601ae6b6ffd09a76c95a8536fb4b30fa30d19580cf912736c34a7a961af2f3fc98ce7969f41dcbaa4653e94f7e48
May 13 23:57:35.281353 unknown[879]: fetched base config from "system"
May 13 23:57:35.281601 unknown[879]: fetched base config from "system"
May 13 23:57:35.281971 ignition[879]: fetch: fetch complete
May 13 23:57:35.281608 unknown[879]: fetched user config from "azure"
May 13 23:57:35.281976 ignition[879]: fetch: fetch passed
May 13 23:57:35.283676 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 13 23:57:35.282024 ignition[879]: Ignition finished successfully
May 13 23:57:35.290638 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 23:57:35.312126 ignition[885]: Ignition 2.20.0
May 13 23:57:35.312137 ignition[885]: Stage: kargs
May 13 23:57:35.312333 ignition[885]: no configs at "/usr/lib/ignition/base.d"
May 13 23:57:35.315085 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 23:57:35.312347 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:57:35.313230 ignition[885]: kargs: kargs passed
May 13 23:57:35.322639 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 23:57:35.313273 ignition[885]: Ignition finished successfully
May 13 23:57:35.346271 ignition[891]: Ignition 2.20.0
May 13 23:57:35.346281 ignition[891]: Stage: disks
May 13 23:57:35.346478 ignition[891]: no configs at "/usr/lib/ignition/base.d"
May 13 23:57:35.348173 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 23:57:35.346491 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:57:35.351367 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 23:57:35.347364 ignition[891]: disks: disks passed
May 13 23:57:35.357029 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 23:57:35.347403 ignition[891]: Ignition finished successfully
May 13 23:57:35.363656 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:57:35.374732 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:57:35.376780 systemd[1]: Reached target basic.target - Basic System.
May 13 23:57:35.383104 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 23:57:35.547589 systemd-fsck[900]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
May 13 23:57:35.551810 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 23:57:35.559675 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 23:57:35.656521 kernel: EXT4-fs (sda9): mounted filesystem c413e98b-da35-46b1-9852-45706e1b1f52 r/w with ordered data mode. Quota mode: none.
May 13 23:57:35.657084 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 23:57:35.659214 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 23:57:35.701416 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:57:35.707871 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 23:57:35.716693 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 13 23:57:35.722560 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 23:57:35.723554 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:57:35.734522 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (911)
May 13 23:57:35.740545 kernel: BTRFS info (device sda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:57:35.740598 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 13 23:57:35.742730 kernel: BTRFS info (device sda6): using free space tree
May 13 23:57:35.744882 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 23:57:35.750527 kernel: BTRFS info (device sda6): auto enabling async discard
May 13 23:57:35.751264 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:57:35.754944 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 23:57:36.009718 systemd-networkd[870]: eth0: Gained IPv6LL
May 13 23:57:36.201747 systemd-networkd[870]: enP36631s1: Gained IPv6LL
May 13 23:57:36.308723 coreos-metadata[913]: May 13 23:57:36.308 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 13 23:57:36.313731 coreos-metadata[913]: May 13 23:57:36.313 INFO Fetch successful
May 13 23:57:36.315873 coreos-metadata[913]: May 13 23:57:36.313 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 13 23:57:36.320996 initrd-setup-root[940]: cut: /sysroot/etc/passwd: No such file or directory
May 13 23:57:36.327287 coreos-metadata[913]: May 13 23:57:36.327 INFO Fetch successful
May 13 23:57:36.329186 coreos-metadata[913]: May 13 23:57:36.328 INFO wrote hostname ci-4284.0.0-n-1d9e750aa6 to /sysroot/etc/hostname
May 13 23:57:36.334496 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 13 23:57:36.343619 initrd-setup-root[948]: cut: /sysroot/etc/group: No such file or directory
May 13 23:57:36.363720 initrd-setup-root[955]: cut: /sysroot/etc/shadow: No such file or directory
May 13 23:57:36.383141 initrd-setup-root[962]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 23:57:37.182831 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 23:57:37.188519 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 23:57:37.204643 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 23:57:37.212313 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 23:57:37.217612 kernel: BTRFS info (device sda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:57:37.244264 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 23:57:37.248769 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 23:57:37.254227 ignition[1029]: INFO : Ignition 2.20.0
May 13 23:57:37.254227 ignition[1029]: INFO : Stage: mount
May 13 23:57:37.254227 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:57:37.254227 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:57:37.254227 ignition[1029]: INFO : mount: mount passed
May 13 23:57:37.254227 ignition[1029]: INFO : Ignition finished successfully
May 13 23:57:37.254621 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 23:57:37.272862 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:57:37.296524 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1042)
May 13 23:57:37.296566 kernel: BTRFS info (device sda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:57:37.299519 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 13 23:57:37.303144 kernel: BTRFS info (device sda6): using free space tree
May 13 23:57:37.308523 kernel: BTRFS info (device sda6): auto enabling async discard
May 13 23:57:37.309778 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:57:37.335665 ignition[1059]: INFO : Ignition 2.20.0
May 13 23:57:37.335665 ignition[1059]: INFO : Stage: files
May 13 23:57:37.338946 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:57:37.338946 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:57:37.338946 ignition[1059]: DEBUG : files: compiled without relabeling support, skipping
May 13 23:57:37.351390 ignition[1059]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 23:57:37.351390 ignition[1059]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 23:57:37.414604 ignition[1059]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 23:57:37.417834 ignition[1059]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 23:57:37.417834 ignition[1059]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 23:57:37.415136 unknown[1059]: wrote ssh authorized keys file for user: core
May 13 23:57:37.444306 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 13 23:57:37.448678 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 13 23:57:37.492031 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 23:57:37.628813 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 13 23:57:37.633160 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 23:57:37.633160 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 13 23:57:38.202463 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 23:57:38.455094 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 23:57:38.455094 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 13 23:57:38.463221 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 13 23:57:38.463221 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:57:38.463221 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:57:38.463221 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:57:38.477836 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:57:38.477836 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:57:38.477836 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:57:38.477836 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:57:38.477836 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:57:38.477836 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 23:57:38.477836 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 23:57:38.477836 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 23:57:38.477836 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 13 23:57:38.918476 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 13 23:57:39.178576 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 23:57:39.178576 ignition[1059]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 13 23:57:39.218390 ignition[1059]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:57:39.222662 ignition[1059]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:57:39.222662 ignition[1059]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 13 23:57:39.229035 ignition[1059]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 13 23:57:39.229035 ignition[1059]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 13 23:57:39.234980 ignition[1059]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:57:39.239220 ignition[1059]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:57:39.242825 ignition[1059]: INFO : files: files passed
May 13 23:57:39.242825 ignition[1059]: INFO : Ignition finished successfully
May 13 23:57:39.245433 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 23:57:39.251095 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 23:57:39.254876 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 23:57:39.274414 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:57:39.274414 initrd-setup-root-after-ignition[1087]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:57:39.281473 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:57:39.278434 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:57:39.292653 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 23:57:39.297876 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 23:57:39.304747 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 23:57:39.304864 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 23:57:39.342528 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 23:57:39.342638 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 23:57:39.347126 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 23:57:39.351534 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 23:57:39.353663 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 23:57:39.356625 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 23:57:39.378590 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:57:39.384816 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 23:57:39.401285 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 23:57:39.406584 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:57:39.409098 systemd[1]: Stopped target timers.target - Timer Units. May 13 23:57:39.415098 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 23:57:39.415244 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:57:39.422043 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 23:57:39.426197 systemd[1]: Stopped target basic.target - Basic System. May 13 23:57:39.428061 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 23:57:39.434103 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:57:39.436539 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 23:57:39.443044 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 23:57:39.445286 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:57:39.449673 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 23:57:39.456197 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 23:57:39.458611 systemd[1]: Stopped target swap.target - Swaps. May 13 23:57:39.462394 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
May 13 23:57:39.462574 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 23:57:39.466481 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 23:57:39.470170 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:57:39.474568 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 23:57:39.476724 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:57:39.482161 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 23:57:39.484147 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 23:57:39.489398 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 23:57:39.489574 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:57:39.493596 systemd[1]: ignition-files.service: Deactivated successfully. May 13 23:57:39.493736 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 23:57:39.502418 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 13 23:57:39.502586 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 13 23:57:39.511649 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 23:57:39.525176 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 23:57:39.529626 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 23:57:39.529801 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:57:39.535574 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 23:57:39.536428 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:57:39.546278 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
May 13 23:57:39.549819 ignition[1113]: INFO : Ignition 2.20.0 May 13 23:57:39.549819 ignition[1113]: INFO : Stage: umount May 13 23:57:39.557145 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:57:39.557145 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:57:39.557145 ignition[1113]: INFO : umount: umount passed May 13 23:57:39.557145 ignition[1113]: INFO : Ignition finished successfully May 13 23:57:39.550626 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 23:57:39.553434 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 23:57:39.553524 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 23:57:39.564903 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 23:57:39.564959 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 23:57:39.568776 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 23:57:39.568823 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 23:57:39.572943 systemd[1]: ignition-fetch.service: Deactivated successfully. May 13 23:57:39.573003 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 13 23:57:39.575129 systemd[1]: Stopped target network.target - Network. May 13 23:57:39.578592 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 23:57:39.578653 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:57:39.581070 systemd[1]: Stopped target paths.target - Path Units. May 13 23:57:39.582144 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 23:57:39.582847 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:57:39.587197 systemd[1]: Stopped target slices.target - Slice Units. May 13 23:57:39.590892 systemd[1]: Stopped target sockets.target - Socket Units. 
May 13 23:57:39.617457 systemd[1]: iscsid.socket: Deactivated successfully. May 13 23:57:39.617526 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:57:39.623129 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 23:57:39.623213 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:57:39.630910 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 23:57:39.630987 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 23:57:39.634933 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 23:57:39.634989 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 23:57:39.643343 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 23:57:39.647868 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 23:57:39.651343 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 23:57:39.651971 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 23:57:39.652072 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 23:57:39.662268 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 23:57:39.662770 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 23:57:39.662876 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 23:57:39.667389 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 23:57:39.667464 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 23:57:39.671937 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 23:57:39.671998 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 23:57:39.674476 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 23:57:39.688304 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
May 13 23:57:39.688371 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:57:39.695547 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:57:39.703084 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 23:57:39.703977 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 23:57:39.711558 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 23:57:39.715038 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 23:57:39.716323 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:57:39.723802 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 23:57:39.725262 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 23:57:39.726476 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 23:57:39.726742 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:57:39.732121 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 23:57:39.732180 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 23:57:39.736112 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 23:57:39.736172 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 23:57:39.750005 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:57:39.750080 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:57:39.758024 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 23:57:39.760557 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 23:57:39.760627 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 13 23:57:39.769993 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 23:57:39.770062 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 23:57:39.774204 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 23:57:39.774251 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:57:39.780818 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 13 23:57:39.780869 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:57:39.783517 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 23:57:39.803119 kernel: hv_netvsc 7ced8d41-f078-7ced-8d41-f0787ced8d41 eth0: Data path switched from VF: enP36631s1 May 13 23:57:39.783566 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:57:39.788550 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 23:57:39.788602 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:57:39.793288 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:57:39.793335 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:57:39.806419 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 23:57:39.806476 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 13 23:57:39.806533 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:57:39.806574 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 23:57:39.807095 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
May 13 23:57:39.807199 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 23:57:39.835268 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 23:57:39.835396 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 23:57:39.839580 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 23:57:39.845635 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 23:57:39.911638 systemd[1]: Switching root. May 13 23:57:40.046480 systemd-journald[177]: Journal stopped May 13 23:58:01.578780 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). May 13 23:58:01.578825 kernel: SELinux: policy capability network_peer_controls=1 May 13 23:58:01.578848 kernel: SELinux: policy capability open_perms=1 May 13 23:58:01.578862 kernel: SELinux: policy capability extended_socket_class=1 May 13 23:58:01.578875 kernel: SELinux: policy capability always_check_network=0 May 13 23:58:01.578889 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 23:58:01.578905 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 23:58:01.578918 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 23:58:01.578935 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 23:58:01.578948 kernel: audit: type=1403 audit(1747180662.034:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 23:58:01.578964 systemd[1]: Successfully loaded SELinux policy in 140.824ms. May 13 23:58:01.578979 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.969ms. 
May 13 23:58:01.578995 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:58:01.579011 systemd[1]: Detected virtualization microsoft. May 13 23:58:01.579030 systemd[1]: Detected architecture x86-64. May 13 23:58:01.579045 systemd[1]: Detected first boot. May 13 23:58:01.579061 systemd[1]: Hostname set to . May 13 23:58:01.579076 systemd[1]: Initializing machine ID from random generator. May 13 23:58:01.579091 zram_generator::config[1157]: No configuration found. May 13 23:58:01.579109 kernel: Guest personality initialized and is inactive May 13 23:58:01.579123 kernel: VMCI host device registered (name=vmci, major=10, minor=124) May 13 23:58:01.579136 kernel: Initialized host personality May 13 23:58:01.579149 kernel: NET: Registered PF_VSOCK protocol family May 13 23:58:01.579163 systemd[1]: Populated /etc with preset unit settings. May 13 23:58:01.579180 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 23:58:01.579197 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 23:58:01.579213 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 23:58:01.579230 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 23:58:01.579250 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 23:58:01.579269 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 23:58:01.579286 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 23:58:01.579304 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
May 13 23:58:01.579322 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 23:58:01.579340 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 23:58:01.579357 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 23:58:01.579378 systemd[1]: Created slice user.slice - User and Session Slice. May 13 23:58:01.579395 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:58:01.579414 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:58:01.579431 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 23:58:01.579450 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 23:58:01.579474 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 23:58:01.579492 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:58:01.579542 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 13 23:58:01.579562 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:58:01.579576 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 23:58:01.579591 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 23:58:01.579606 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 23:58:01.579620 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 23:58:01.579634 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:58:01.579648 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
May 13 23:58:01.579668 systemd[1]: Reached target slices.target - Slice Units. May 13 23:58:01.579684 systemd[1]: Reached target swap.target - Swaps. May 13 23:58:01.579700 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 23:58:01.579715 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 23:58:01.579733 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 23:58:01.579750 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:58:01.579770 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:58:01.579786 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:58:01.579801 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 23:58:01.579817 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 23:58:01.579833 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 23:58:01.579849 systemd[1]: Mounting media.mount - External Media Directory... May 13 23:58:01.579866 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:58:01.579888 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 23:58:01.579907 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 23:58:01.579924 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 23:58:01.579942 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 23:58:01.579959 systemd[1]: Reached target machines.target - Containers. May 13 23:58:01.579975 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
May 13 23:58:01.579992 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:58:01.580010 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:58:01.580026 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 23:58:01.580044 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:58:01.580060 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:58:01.580076 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:58:01.580091 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 23:58:01.580107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:58:01.580124 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 23:58:01.580140 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 23:58:01.580156 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 23:58:01.580175 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 23:58:01.580191 systemd[1]: Stopped systemd-fsck-usr.service. May 13 23:58:01.580207 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:58:01.580223 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:58:01.580240 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:58:01.580256 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
May 13 23:58:01.580272 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 23:58:01.580289 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 13 23:58:01.580309 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:58:01.580325 systemd[1]: verity-setup.service: Deactivated successfully. May 13 23:58:01.580341 systemd[1]: Stopped verity-setup.service. May 13 23:58:01.580356 kernel: fuse: init (API version 7.39) May 13 23:58:01.580373 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:58:01.580389 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 23:58:01.580406 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 23:58:01.580422 systemd[1]: Mounted media.mount - External Media Directory. May 13 23:58:01.580441 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 23:58:01.580457 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 23:58:01.580473 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 23:58:01.580489 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:58:01.580534 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 23:58:01.580549 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 23:58:01.580564 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:58:01.580578 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:58:01.580598 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:58:01.580613 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
May 13 23:58:01.580625 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 23:58:01.580636 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 23:58:01.580650 kernel: loop: module loaded May 13 23:58:01.580691 systemd-journald[1240]: Collecting audit messages is disabled. May 13 23:58:01.580723 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:58:01.580736 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:58:01.580748 systemd-journald[1240]: Journal started May 13 23:58:01.580773 systemd-journald[1240]: Runtime Journal (/run/log/journal/227de1b07661430fbac6958b8c3419dc) is 8M, max 158.7M, 150.7M free. May 13 23:58:00.959657 systemd[1]: Queued start job for default target multi-user.target. May 13 23:58:00.967382 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 13 23:58:00.967792 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 23:58:01.591252 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:58:01.593705 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 23:58:01.597080 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:58:01.600239 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 23:58:01.603379 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 23:58:01.619398 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 23:58:01.625608 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 23:58:01.659103 kernel: ACPI: bus type drm_connector registered May 13 23:58:01.650176 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
May 13 23:58:01.655621 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 23:58:01.655680 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:58:01.662308 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 23:58:01.674658 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 23:58:01.679749 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 23:58:01.682892 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:58:01.696633 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 23:58:01.701715 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 23:58:01.704268 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:58:01.709826 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 23:58:01.712329 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:58:01.716782 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:58:01.722433 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 23:58:01.730750 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:58:01.737856 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:58:01.739555 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
May 13 23:58:01.743898 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 23:58:01.749108 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 23:58:01.752594 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 23:58:01.755660 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:58:01.767617 systemd-journald[1240]: Time spent on flushing to /var/log/journal/227de1b07661430fbac6958b8c3419dc is 22.643ms for 977 entries. May 13 23:58:01.767617 systemd-journald[1240]: System Journal (/var/log/journal/227de1b07661430fbac6958b8c3419dc) is 8M, max 2.6G, 2.6G free. May 13 23:58:01.937109 systemd-journald[1240]: Received client request to flush runtime journal. May 13 23:58:01.937171 kernel: loop0: detected capacity change from 0 to 218376 May 13 23:58:01.763765 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 23:58:01.776538 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 23:58:01.789071 udevadm[1307]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 23:58:01.915923 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 23:58:01.918738 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 23:58:01.925702 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 23:58:01.932132 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. May 13 23:58:01.932153 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. May 13 23:58:01.939392 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
May 13 23:58:01.944382 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:58:01.952652 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 23:58:01.964986 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:58:02.003535 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 23:58:02.120614 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 23:58:02.121427 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 13 23:58:02.245533 kernel: loop1: detected capacity change from 0 to 151640 May 13 23:58:03.811853 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 23:58:03.819687 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:58:03.844841 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. May 13 23:58:03.844865 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. May 13 23:58:03.849397 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:58:05.521539 kernel: loop2: detected capacity change from 0 to 109808 May 13 23:58:07.493071 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 23:58:07.497702 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:58:07.535890 systemd-udevd[1327]: Using default interface naming scheme 'v255'. May 13 23:58:08.207570 kernel: loop3: detected capacity change from 0 to 28424 May 13 23:58:08.459875 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:58:08.471771 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:58:08.530684 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
May 13 23:58:08.829897 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:58:08.835341 kernel: hv_vmbus: registering driver hyperv_fb May 13 23:58:08.843607 kernel: hyperv_fb: Synthvid Version major 3, minor 5 May 13 23:58:08.843710 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 May 13 23:58:08.846722 kernel: Console: switching to colour dummy device 80x25 May 13 23:58:08.850142 kernel: Console: switching to colour frame buffer device 128x48 May 13 23:58:08.858921 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:58:08.859149 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:58:08.862683 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:58:08.865540 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:58:08.877672 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 23:58:08.899429 kernel: hv_vmbus: registering driver hv_balloon May 13 23:58:08.978542 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 May 13 23:58:08.987754 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 23:58:09.163535 kernel: mousedev: PS/2 mouse device common for all mice May 13 23:58:09.947558 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1345) May 13 23:58:10.035927 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 13 23:58:10.040890 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
May 13 23:58:10.058798 systemd-networkd[1338]: lo: Link UP May 13 23:58:10.058807 systemd-networkd[1338]: lo: Gained carrier May 13 23:58:10.062182 systemd-networkd[1338]: Enumeration completed May 13 23:58:10.062408 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:58:10.064818 systemd-networkd[1338]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:58:10.064959 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 23:58:10.066692 systemd-networkd[1338]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:58:10.069672 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 23:58:10.126540 kernel: mlx5_core 8f17:00:02.0 enP36631s1: Link up May 13 23:58:10.130526 kernel: kvm_intel: Using Hyper-V Enlightened VMCS May 13 23:58:10.144528 kernel: hv_netvsc 7ced8d41-f078-7ced-8d41-f0787ced8d41 eth0: Data path switched to VF: enP36631s1 May 13 23:58:10.144810 systemd-networkd[1338]: enP36631s1: Link UP May 13 23:58:10.144937 systemd-networkd[1338]: eth0: Link UP May 13 23:58:10.144943 systemd-networkd[1338]: eth0: Gained carrier May 13 23:58:10.144967 systemd-networkd[1338]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:58:10.150919 systemd-networkd[1338]: enP36631s1: Gained carrier May 13 23:58:10.183576 systemd-networkd[1338]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 May 13 23:58:10.614102 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 23:58:10.662900 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
May 13 23:58:10.949572 kernel: loop4: detected capacity change from 0 to 218376 May 13 23:58:10.959524 kernel: loop5: detected capacity change from 0 to 151640 May 13 23:58:10.979531 kernel: loop6: detected capacity change from 0 to 109808 May 13 23:58:10.990556 kernel: loop7: detected capacity change from 0 to 28424 May 13 23:58:10.997331 (sd-merge)[1447]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. May 13 23:58:10.997958 (sd-merge)[1447]: Merged extensions into '/usr'. May 13 23:58:11.001765 systemd[1]: Reload requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)... May 13 23:58:11.001780 systemd[1]: Reloading... May 13 23:58:11.078540 zram_generator::config[1478]: No configuration found. May 13 23:58:11.209732 systemd-networkd[1338]: enP36631s1: Gained IPv6LL May 13 23:58:11.401680 systemd-networkd[1338]: eth0: Gained IPv6LL May 13 23:58:11.713445 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:58:11.823096 systemd[1]: Reloading finished in 820 ms. May 13 23:58:11.845241 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 23:58:11.846646 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 23:58:11.847321 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:58:11.862108 systemd[1]: Starting ensure-sysext.service... May 13 23:58:11.867671 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 23:58:11.873826 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:58:11.892226 systemd[1]: Reload requested from client PID 1540 ('systemctl') (unit ensure-sysext.service)... 
May 13 23:58:11.892246 systemd[1]: Reloading... May 13 23:58:11.973538 zram_generator::config[1575]: No configuration found. May 13 23:58:12.021406 systemd-tmpfiles[1542]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 23:58:12.024214 systemd-tmpfiles[1542]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 23:58:12.025442 systemd-tmpfiles[1542]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 23:58:12.026094 systemd-tmpfiles[1542]: ACLs are not supported, ignoring. May 13 23:58:12.026273 systemd-tmpfiles[1542]: ACLs are not supported, ignoring. May 13 23:58:12.032021 systemd-tmpfiles[1542]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:58:12.032040 systemd-tmpfiles[1542]: Skipping /boot May 13 23:58:12.046707 systemd-tmpfiles[1542]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:58:12.046722 systemd-tmpfiles[1542]: Skipping /boot May 13 23:58:12.118197 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:58:12.228323 systemd[1]: Reloading finished in 335 ms. May 13 23:58:12.253112 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:58:12.260078 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:58:12.358764 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 23:58:12.365599 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 23:58:12.372723 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:58:12.379428 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
May 13 23:58:12.391732 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:58:12.391985 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:58:12.395560 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:58:12.399146 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:58:12.406865 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:58:12.702007 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:58:12.702246 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:58:12.702379 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:58:12.705921 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:58:12.706090 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:58:12.709151 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:58:12.709323 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:58:12.710564 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:58:12.710729 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:58:12.717078 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 13 23:58:12.717331 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:58:12.718738 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:58:12.723867 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:58:12.731411 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:58:12.733716 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:58:12.734006 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:58:12.734279 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:58:12.737229 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:58:12.737379 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:58:12.738706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:58:12.738861 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:58:12.739820 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:58:12.739976 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:58:12.744352 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:58:12.744770 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:58:12.746202 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
May 13 23:58:12.750708 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:58:12.766440 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:58:12.771677 lvm[1541]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:58:12.774819 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:58:12.784782 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:58:12.784956 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:58:12.785182 systemd[1]: Reached target time-set.target - System Time Set. May 13 23:58:12.788841 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:58:12.794875 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 23:58:12.800186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:58:12.800417 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:58:12.803313 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:58:12.803570 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:58:12.804741 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:58:12.804888 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:58:12.805414 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:58:12.805571 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
May 13 23:58:12.816064 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:58:12.816210 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:58:12.817729 systemd[1]: Finished ensure-sysext.service. May 13 23:58:13.033277 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:58:13.100753 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 23:58:13.119296 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 23:58:13.122246 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:58:13.125721 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 23:58:13.140435 lvm[1685]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:58:13.157689 systemd-resolved[1644]: Positive Trust Anchors: May 13 23:58:13.157704 systemd-resolved[1644]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:58:13.157765 systemd-resolved[1644]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:58:13.171317 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
May 13 23:58:13.216996 augenrules[1689]: No rules May 13 23:58:13.217946 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:58:13.218219 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:58:13.378757 systemd-resolved[1644]: Using system hostname 'ci-4284.0.0-n-1d9e750aa6'. May 13 23:58:13.380512 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:58:13.383076 systemd[1]: Reached target network.target - Network. May 13 23:58:13.385134 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:58:13.387547 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:58:22.423084 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 23:58:22.426457 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 23:58:25.800694 ldconfig[1292]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 23:58:25.853112 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 23:58:25.857434 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 23:58:25.880355 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 23:58:25.883355 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:58:25.886031 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:58:25.889368 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:58:25.892105 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
May 13 23:58:25.894488 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:58:25.897180 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:58:25.900068 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:58:25.900109 systemd[1]: Reached target paths.target - Path Units. May 13 23:58:25.902101 systemd[1]: Reached target timers.target - Timer Units. May 13 23:58:25.906734 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:58:25.910335 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:58:25.915068 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:58:25.917821 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:58:25.920379 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:58:25.931908 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:58:25.935144 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:58:25.938234 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:58:25.940567 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:58:25.942651 systemd[1]: Reached target basic.target - Basic System. May 13 23:58:25.944604 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:58:25.944639 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:58:25.947021 systemd[1]: Starting chronyd.service - NTP client/server... May 13 23:58:25.951588 systemd[1]: Starting containerd.service - containerd container runtime... 
May 13 23:58:25.963668 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 13 23:58:25.969601 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:58:25.976140 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 23:58:25.980273 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 23:58:25.982591 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:58:25.982640 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). May 13 23:58:25.984741 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. May 13 23:58:25.990816 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). May 13 23:58:25.995670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:58:26.002742 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:58:26.008481 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:58:26.013786 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 23:58:26.018726 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:58:26.023764 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:58:26.035665 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 23:58:26.039080 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:58:26.041087 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. May 13 23:58:26.043054 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:58:26.049948 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:58:26.051347 (chronyd)[1701]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS May 13 23:58:26.070974 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 23:58:26.071993 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 23:58:26.095811 jq[1705]: false May 13 23:58:26.103484 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:58:26.103781 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:58:26.123643 KVP[1709]: KVP starting; pid is:1709 May 13 23:58:26.126893 (ntainerd)[1734]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:58:26.129306 chronyd[1742]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) May 13 23:58:26.132298 jq[1720]: true May 13 23:58:26.140046 KVP[1709]: KVP LIC Version: 3.1 May 13 23:58:26.140659 kernel: hv_utils: KVP IC version 4.0 May 13 23:58:26.142922 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:58:26.143217 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
May 13 23:58:26.149242 extend-filesystems[1708]: Found loop4 May 13 23:58:26.151835 extend-filesystems[1708]: Found loop5 May 13 23:58:26.151835 extend-filesystems[1708]: Found loop6 May 13 23:58:26.151835 extend-filesystems[1708]: Found loop7 May 13 23:58:26.151835 extend-filesystems[1708]: Found sda May 13 23:58:26.151835 extend-filesystems[1708]: Found sda1 May 13 23:58:26.151835 extend-filesystems[1708]: Found sda2 May 13 23:58:26.151835 extend-filesystems[1708]: Found sda3 May 13 23:58:26.151835 extend-filesystems[1708]: Found usr May 13 23:58:26.151835 extend-filesystems[1708]: Found sda4 May 13 23:58:26.151835 extend-filesystems[1708]: Found sda6 May 13 23:58:26.151835 extend-filesystems[1708]: Found sda7 May 13 23:58:26.151835 extend-filesystems[1708]: Found sda9 May 13 23:58:26.151835 extend-filesystems[1708]: Checking size of /dev/sda9 May 13 23:58:26.177265 chronyd[1742]: Timezone right/UTC failed leap second check, ignoring May 13 23:58:26.179915 jq[1745]: true May 13 23:58:26.177470 chronyd[1742]: Loaded seccomp filter (level 2) May 13 23:58:26.181181 systemd[1]: Started chronyd.service - NTP client/server. May 13 23:58:26.195240 update_engine[1719]: I20250513 23:58:26.195088 1719 main.cc:92] Flatcar Update Engine starting May 13 23:58:26.212576 tar[1726]: linux-amd64/LICENSE May 13 23:58:26.212866 tar[1726]: linux-amd64/helm May 13 23:58:26.224564 extend-filesystems[1708]: Old size kept for /dev/sda9 May 13 23:58:26.228575 extend-filesystems[1708]: Found sr0 May 13 23:58:26.231018 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:58:26.231577 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:58:26.255263 systemd-logind[1718]: New seat seat0. May 13 23:58:26.276803 systemd-logind[1718]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) May 13 23:58:26.276995 systemd[1]: Started systemd-logind.service - User Login Management. 
May 13 23:58:26.303768 bash[1766]: Updated "/home/core/.ssh/authorized_keys" May 13 23:58:26.307976 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:58:26.311595 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 23:58:26.337943 dbus-daemon[1704]: [system] SELinux support is enabled May 13 23:58:26.338460 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:58:26.346976 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:58:26.347080 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:58:26.351111 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:58:26.351132 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:58:26.365432 dbus-daemon[1704]: [system] Successfully activated service 'org.freedesktop.systemd1' May 13 23:58:26.370055 systemd[1]: Started update-engine.service - Update Engine. May 13 23:58:26.374468 update_engine[1719]: I20250513 23:58:26.374068 1719 update_check_scheduler.cc:74] Next update check in 8m3s May 13 23:58:26.376091 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
May 13 23:58:26.527089 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1777) May 13 23:58:26.536396 coreos-metadata[1703]: May 13 23:58:26.536 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 13 23:58:26.541360 coreos-metadata[1703]: May 13 23:58:26.541 INFO Fetch successful May 13 23:58:26.541360 coreos-metadata[1703]: May 13 23:58:26.541 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 13 23:58:26.546616 coreos-metadata[1703]: May 13 23:58:26.546 INFO Fetch successful May 13 23:58:26.548004 coreos-metadata[1703]: May 13 23:58:26.547 INFO Fetching http://168.63.129.16/machine/0c988c14-1d9d-4bde-9e23-5e3b324ebfc6/d5b580c6%2D6158%2D478f%2Da89a%2D9b370284f0f6.%5Fci%2D4284.0.0%2Dn%2D1d9e750aa6?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 13 23:58:26.552123 coreos-metadata[1703]: May 13 23:58:26.551 INFO Fetch successful May 13 23:58:26.552366 coreos-metadata[1703]: May 13 23:58:26.552 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 13 23:58:26.568302 coreos-metadata[1703]: May 13 23:58:26.566 INFO Fetch successful May 13 23:58:26.651859 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 13 23:58:26.656322 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:58:26.876841 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:58:26.976848 locksmithd[1783]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:58:27.006898 sshd_keygen[1749]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:58:27.056964 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:58:27.063826 systemd[1]: Starting issuegen.service - Generate /run/issue... 
May 13 23:58:27.076624 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... May 13 23:58:27.107251 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:58:27.107666 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:58:27.120084 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:58:27.140645 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. May 13 23:58:27.169622 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:58:27.177832 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:58:27.182206 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 23:58:27.187315 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:58:27.368958 tar[1726]: linux-amd64/README.md May 13 23:58:27.388392 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 23:58:27.800175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 23:58:27.811882 (kubelet)[1886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:58:27.887357 containerd[1734]: time="2025-05-13T23:58:27Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 23:58:27.888181 containerd[1734]: time="2025-05-13T23:58:27.888112200Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 13 23:58:27.901238 containerd[1734]: time="2025-05-13T23:58:27.901200100Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8µs" May 13 23:58:27.902058 containerd[1734]: time="2025-05-13T23:58:27.901352100Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 23:58:27.902058 containerd[1734]: time="2025-05-13T23:58:27.901397800Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 23:58:27.902058 containerd[1734]: time="2025-05-13T23:58:27.901562200Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 23:58:27.902058 containerd[1734]: time="2025-05-13T23:58:27.901582700Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 23:58:27.902058 containerd[1734]: time="2025-05-13T23:58:27.901620600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:58:27.902058 containerd[1734]: time="2025-05-13T23:58:27.901689100Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:58:27.902058 
containerd[1734]: time="2025-05-13T23:58:27.901703300Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 23:58:27.902058 containerd[1734]: time="2025-05-13T23:58:27.901963000Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 23:58:27.902058 containerd[1734]: time="2025-05-13T23:58:27.901983500Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 23:58:27.902058 containerd[1734]: time="2025-05-13T23:58:27.901999300Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 23:58:27.902058 containerd[1734]: time="2025-05-13T23:58:27.902010600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 13 23:58:27.902461 containerd[1734]: time="2025-05-13T23:58:27.902137500Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 13 23:58:27.902461 containerd[1734]: time="2025-05-13T23:58:27.902373300Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 23:58:27.902461 containerd[1734]: time="2025-05-13T23:58:27.902411000Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 23:58:27.902461 containerd[1734]: time="2025-05-13T23:58:27.902425800Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 13 23:58:27.903796 containerd[1734]: time="2025-05-13T23:58:27.902465600Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 13 23:58:27.903796 containerd[1734]: time="2025-05-13T23:58:27.902777000Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 13 23:58:27.903796 containerd[1734]: time="2025-05-13T23:58:27.902851600Z" level=info msg="metadata content store policy set" policy=shared
May 13 23:58:27.915404 containerd[1734]: time="2025-05-13T23:58:27.913976800Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 13 23:58:27.915404 containerd[1734]: time="2025-05-13T23:58:27.914044200Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 13 23:58:27.915404 containerd[1734]: time="2025-05-13T23:58:27.914072000Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 13 23:58:27.915404 containerd[1734]: time="2025-05-13T23:58:27.914092700Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 13 23:58:27.915404 containerd[1734]: time="2025-05-13T23:58:27.914109600Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 13 23:58:27.915404 containerd[1734]: time="2025-05-13T23:58:27.914124400Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 13 23:58:27.915404 containerd[1734]: time="2025-05-13T23:58:27.914142300Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 13 23:58:27.915404 containerd[1734]: time="2025-05-13T23:58:27.914159100Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 13 23:58:27.915404 containerd[1734]: time="2025-05-13T23:58:27.914174300Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 13 23:58:27.915404 containerd[1734]: time="2025-05-13T23:58:27.914195200Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 13 23:58:27.915404 containerd[1734]: time="2025-05-13T23:58:27.914210500Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 13 23:58:27.915404 containerd[1734]: time="2025-05-13T23:58:27.914226800Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 13 23:58:27.915404 containerd[1734]: time="2025-05-13T23:58:27.914360900Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 13 23:58:27.915404 containerd[1734]: time="2025-05-13T23:58:27.914385400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 13 23:58:27.915929 containerd[1734]: time="2025-05-13T23:58:27.914402700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 13 23:58:27.915929 containerd[1734]: time="2025-05-13T23:58:27.914417500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 13 23:58:27.915929 containerd[1734]: time="2025-05-13T23:58:27.914437700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 13 23:58:27.915929 containerd[1734]: time="2025-05-13T23:58:27.914453000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 13 23:58:27.915929 containerd[1734]: time="2025-05-13T23:58:27.914469100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 13 23:58:27.915929 containerd[1734]: time="2025-05-13T23:58:27.914485800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 13 23:58:27.915929 containerd[1734]: time="2025-05-13T23:58:27.914523900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 13 23:58:27.915929 containerd[1734]: time="2025-05-13T23:58:27.914541300Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 13 23:58:27.915929 containerd[1734]: time="2025-05-13T23:58:27.914556200Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 13 23:58:27.915929 containerd[1734]: time="2025-05-13T23:58:27.914629700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 13 23:58:27.915929 containerd[1734]: time="2025-05-13T23:58:27.914652700Z" level=info msg="Start snapshots syncer"
May 13 23:58:27.915929 containerd[1734]: time="2025-05-13T23:58:27.914677800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 13 23:58:27.916316 containerd[1734]: time="2025-05-13T23:58:27.914986100Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 13 23:58:27.916316 containerd[1734]: time="2025-05-13T23:58:27.915050500Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 13 23:58:27.916526 containerd[1734]: time="2025-05-13T23:58:27.915136100Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 13 23:58:27.916526 containerd[1734]: time="2025-05-13T23:58:27.915250600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 13 23:58:27.916526 containerd[1734]: time="2025-05-13T23:58:27.915277600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 13 23:58:27.916526 containerd[1734]: time="2025-05-13T23:58:27.915293100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 13 23:58:27.916526 containerd[1734]: time="2025-05-13T23:58:27.915309900Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 13 23:58:27.916526 containerd[1734]: time="2025-05-13T23:58:27.915327200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 13 23:58:27.916526 containerd[1734]: time="2025-05-13T23:58:27.915341700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 13 23:58:27.916526 containerd[1734]: time="2025-05-13T23:58:27.915357800Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 13 23:58:27.916526 containerd[1734]: time="2025-05-13T23:58:27.915389200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 13 23:58:27.916526 containerd[1734]: time="2025-05-13T23:58:27.915406900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 13 23:58:27.916526 containerd[1734]: time="2025-05-13T23:58:27.915418200Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 13 23:58:27.916526 containerd[1734]: time="2025-05-13T23:58:27.915454300Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 23:58:27.916526 containerd[1734]: time="2025-05-13T23:58:27.915472600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 23:58:27.916526 containerd[1734]: time="2025-05-13T23:58:27.915484300Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 23:58:27.917001 containerd[1734]: time="2025-05-13T23:58:27.915496900Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 23:58:27.917001 containerd[1734]: time="2025-05-13T23:58:27.915724700Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 13 23:58:27.917001 containerd[1734]: time="2025-05-13T23:58:27.915744600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 13 23:58:27.917001 containerd[1734]: time="2025-05-13T23:58:27.915760700Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 13 23:58:27.917001 containerd[1734]: time="2025-05-13T23:58:27.915801400Z" level=info msg="runtime interface created"
May 13 23:58:27.917001 containerd[1734]: time="2025-05-13T23:58:27.916152400Z" level=info msg="created NRI interface"
May 13 23:58:27.918301 containerd[1734]: time="2025-05-13T23:58:27.917653800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 13 23:58:27.918301 containerd[1734]: time="2025-05-13T23:58:27.917772600Z" level=info msg="Connect containerd service"
May 13 23:58:27.918301 containerd[1734]: time="2025-05-13T23:58:27.917830500Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 13 23:58:27.919198 containerd[1734]: time="2025-05-13T23:58:27.919169600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 23:58:28.312678 kubelet[1886]: E0513 23:58:28.312621 1886 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:58:28.314921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:58:28.315115 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:58:28.315796 systemd[1]: kubelet.service: Consumed 930ms CPU time, 253M memory peak.
May 13 23:58:28.762243 containerd[1734]: time="2025-05-13T23:58:28.762067600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 13 23:58:28.762243 containerd[1734]: time="2025-05-13T23:58:28.762148100Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 23:58:28.762243 containerd[1734]: time="2025-05-13T23:58:28.762190500Z" level=info msg="Start subscribing containerd event"
May 13 23:58:28.762243 containerd[1734]: time="2025-05-13T23:58:28.762226900Z" level=info msg="Start recovering state"
May 13 23:58:28.762614 containerd[1734]: time="2025-05-13T23:58:28.762336900Z" level=info msg="Start event monitor"
May 13 23:58:28.762614 containerd[1734]: time="2025-05-13T23:58:28.762365100Z" level=info msg="Start cni network conf syncer for default"
May 13 23:58:28.762614 containerd[1734]: time="2025-05-13T23:58:28.762382500Z" level=info msg="Start streaming server"
May 13 23:58:28.762614 containerd[1734]: time="2025-05-13T23:58:28.762392600Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 13 23:58:28.762614 containerd[1734]: time="2025-05-13T23:58:28.762414000Z" level=info msg="runtime interface starting up..."
May 13 23:58:28.762614 containerd[1734]: time="2025-05-13T23:58:28.762423400Z" level=info msg="starting plugins..."
May 13 23:58:28.762614 containerd[1734]: time="2025-05-13T23:58:28.762442600Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 13 23:58:28.765770 containerd[1734]: time="2025-05-13T23:58:28.765161000Z" level=info msg="containerd successfully booted in 0.878870s"
May 13 23:58:28.762750 systemd[1]: Started containerd.service - containerd container runtime.
May 13 23:58:28.766066 systemd[1]: Reached target multi-user.target - Multi-User System.
May 13 23:58:28.771310 systemd[1]: Startup finished in 784ms (firmware) + 28.262s (loader) + 887ms (kernel) + 11.650s (initrd) + 46.875s (userspace) = 1min 28.461s.
May 13 23:58:29.020094 waagent[1869]: 2025-05-13T23:58:29.019937Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
May 13 23:58:29.034353 waagent[1869]: 2025-05-13T23:58:29.021338Z INFO Daemon Daemon OS: flatcar 4284.0.0
May 13 23:58:29.034353 waagent[1869]: 2025-05-13T23:58:29.022063Z INFO Daemon Daemon Python: 3.11.11
May 13 23:58:29.034353 waagent[1869]: 2025-05-13T23:58:29.022974Z INFO Daemon Daemon Run daemon
May 13 23:58:29.034353 waagent[1869]: 2025-05-13T23:58:29.023686Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4284.0.0'
May 13 23:58:29.034353 waagent[1869]: 2025-05-13T23:58:29.024201Z INFO Daemon Daemon Using waagent for provisioning
May 13 23:58:29.034353 waagent[1869]: 2025-05-13T23:58:29.025018Z INFO Daemon Daemon Activate resource disk
May 13 23:58:29.034353 waagent[1869]: 2025-05-13T23:58:29.025608Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
May 13 23:58:29.034353 waagent[1869]: 2025-05-13T23:58:29.029811Z INFO Daemon Daemon Found device: None
May 13 23:58:29.034353 waagent[1869]: 2025-05-13T23:58:29.030611Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
May 13 23:58:29.034353 waagent[1869]: 2025-05-13T23:58:29.031291Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
May 13 23:58:29.034353 waagent[1869]: 2025-05-13T23:58:29.032532Z INFO Daemon Daemon Clean protocol and wireserver endpoint
May 13 23:58:29.034353 waagent[1869]: 2025-05-13T23:58:29.033162Z INFO Daemon Daemon Running default provisioning handler
May 13 23:58:29.052716 waagent[1869]: 2025-05-13T23:58:29.052497Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
May 13 23:58:29.063493 waagent[1869]: 2025-05-13T23:58:29.063425Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
May 13 23:58:29.071422 waagent[1869]: 2025-05-13T23:58:29.064450Z INFO Daemon Daemon cloud-init is enabled: False
May 13 23:58:29.071422 waagent[1869]: 2025-05-13T23:58:29.066365Z INFO Daemon Daemon Copying ovf-env.xml
May 13 23:58:29.077118 login[1871]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 13 23:58:29.079769 login[1872]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 13 23:58:29.091382 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 13 23:58:29.092728 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 13 23:58:29.096061 systemd-logind[1718]: New session 2 of user core.
May 13 23:58:29.106594 systemd-logind[1718]: New session 1 of user core.
May 13 23:58:29.117440 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 13 23:58:29.120146 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 13 23:58:29.130684 (systemd)[1919]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 23:58:29.132816 systemd-logind[1718]: New session c1 of user core.
May 13 23:58:29.178479 waagent[1869]: 2025-05-13T23:58:29.177383Z INFO Daemon Daemon Successfully mounted dvd
May 13 23:58:29.191221 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
May 13 23:58:29.192195 waagent[1869]: 2025-05-13T23:58:29.191401Z INFO Daemon Daemon Detect protocol endpoint
May 13 23:58:29.193630 waagent[1869]: 2025-05-13T23:58:29.193581Z INFO Daemon Daemon Clean protocol and wireserver endpoint
May 13 23:58:29.194591 waagent[1869]: 2025-05-13T23:58:29.194554Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
May 13 23:58:29.197517 waagent[1869]: 2025-05-13T23:58:29.195212Z INFO Daemon Daemon Test for route to 168.63.129.16
May 13 23:58:29.197517 waagent[1869]: 2025-05-13T23:58:29.196038Z INFO Daemon Daemon Route to 168.63.129.16 exists
May 13 23:58:29.197517 waagent[1869]: 2025-05-13T23:58:29.196662Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
May 13 23:58:29.214822 waagent[1869]: 2025-05-13T23:58:29.214780Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
May 13 23:58:29.221019 waagent[1869]: 2025-05-13T23:58:29.215830Z INFO Daemon Daemon Wire protocol version:2012-11-30
May 13 23:58:29.221019 waagent[1869]: 2025-05-13T23:58:29.216346Z INFO Daemon Daemon Server preferred version:2015-04-05
May 13 23:58:29.492094 systemd[1919]: Queued start job for default target default.target.
May 13 23:58:29.500613 systemd[1919]: Created slice app.slice - User Application Slice.
May 13 23:58:29.500651 systemd[1919]: Reached target paths.target - Paths.
May 13 23:58:29.500703 systemd[1919]: Reached target timers.target - Timers.
May 13 23:58:29.501983 systemd[1919]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 13 23:58:29.512692 systemd[1919]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 13 23:58:29.512870 systemd[1919]: Reached target sockets.target - Sockets.
May 13 23:58:29.513053 systemd[1919]: Reached target basic.target - Basic System.
May 13 23:58:29.513113 systemd[1919]: Reached target default.target - Main User Target.
May 13 23:58:29.513147 systemd[1919]: Startup finished in 374ms.
May 13 23:58:29.513267 systemd[1]: Started user@500.service - User Manager for UID 500.
May 13 23:58:29.523732 systemd[1]: Started session-1.scope - Session 1 of User core.
May 13 23:58:29.524740 systemd[1]: Started session-2.scope - Session 2 of User core.
May 13 23:58:29.689203 waagent[1869]: 2025-05-13T23:58:29.689113Z INFO Daemon Daemon Initializing goal state during protocol detection
May 13 23:58:29.692106 waagent[1869]: 2025-05-13T23:58:29.692047Z INFO Daemon Daemon Forcing an update of the goal state.
May 13 23:58:29.697690 waagent[1869]: 2025-05-13T23:58:29.697646Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
May 13 23:58:29.729808 waagent[1869]: 2025-05-13T23:58:29.729744Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164
May 13 23:58:29.738388 waagent[1869]: 2025-05-13T23:58:29.731346Z INFO Daemon
May 13 23:58:29.738388 waagent[1869]: 2025-05-13T23:58:29.732609Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 4295eec5-45ee-4cd2-8062-28b4eec9d825 eTag: 3733399832530540232 source: Fabric]
May 13 23:58:29.738388 waagent[1869]: 2025-05-13T23:58:29.733910Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
May 13 23:58:29.738388 waagent[1869]: 2025-05-13T23:58:29.734988Z INFO Daemon
May 13 23:58:29.738388 waagent[1869]: 2025-05-13T23:58:29.735784Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
May 13 23:58:29.745226 waagent[1869]: 2025-05-13T23:58:29.745136Z INFO Daemon Daemon Downloading artifacts profile blob
May 13 23:58:29.814735 waagent[1869]: 2025-05-13T23:58:29.814662Z INFO Daemon Downloaded certificate {'thumbprint': 'FBF5F80F1ED1CA2F7D67AF50507C00794B0AD1B8', 'hasPrivateKey': False}
May 13 23:58:29.823272 waagent[1869]: 2025-05-13T23:58:29.816310Z INFO Daemon Downloaded certificate {'thumbprint': 'A37CD1314A4C07D94C62FB3A67A3C34C07AB9C09', 'hasPrivateKey': True}
May 13 23:58:29.823272 waagent[1869]: 2025-05-13T23:58:29.817308Z INFO Daemon Fetch goal state completed
May 13 23:58:29.825992 waagent[1869]: 2025-05-13T23:58:29.825953Z INFO Daemon Daemon Starting provisioning
May 13 23:58:29.831406 waagent[1869]: 2025-05-13T23:58:29.826865Z INFO Daemon Daemon Handle ovf-env.xml.
May 13 23:58:29.831406 waagent[1869]: 2025-05-13T23:58:29.827568Z INFO Daemon Daemon Set hostname [ci-4284.0.0-n-1d9e750aa6]
May 13 23:58:29.858923 waagent[1869]: 2025-05-13T23:58:29.858850Z INFO Daemon Daemon Publish hostname [ci-4284.0.0-n-1d9e750aa6]
May 13 23:58:29.865228 waagent[1869]: 2025-05-13T23:58:29.860098Z INFO Daemon Daemon Examine /proc/net/route for primary interface
May 13 23:58:29.865228 waagent[1869]: 2025-05-13T23:58:29.860807Z INFO Daemon Daemon Primary interface is [eth0]
May 13 23:58:29.870037 systemd-networkd[1338]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:58:29.870046 systemd-networkd[1338]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:58:29.870097 systemd-networkd[1338]: eth0: DHCP lease lost
May 13 23:58:29.871281 waagent[1869]: 2025-05-13T23:58:29.871208Z INFO Daemon Daemon Create user account if not exists
May 13 23:58:29.884179 waagent[1869]: 2025-05-13T23:58:29.872185Z INFO Daemon Daemon User core already exists, skip useradd
May 13 23:58:29.884179 waagent[1869]: 2025-05-13T23:58:29.872816Z INFO Daemon Daemon Configure sudoer
May 13 23:58:29.884179 waagent[1869]: 2025-05-13T23:58:29.873746Z INFO Daemon Daemon Configure sshd
May 13 23:58:29.884179 waagent[1869]: 2025-05-13T23:58:29.874364Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
May 13 23:58:29.884179 waagent[1869]: 2025-05-13T23:58:29.874885Z INFO Daemon Daemon Deploy ssh public key.
May 13 23:58:29.917601 systemd-networkd[1338]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16
May 13 23:58:30.973950 waagent[1869]: 2025-05-13T23:58:30.973890Z INFO Daemon Daemon Provisioning complete
May 13 23:58:30.987906 waagent[1869]: 2025-05-13T23:58:30.987848Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
May 13 23:58:30.993760 waagent[1869]: 2025-05-13T23:58:30.988948Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
May 13 23:58:30.993760 waagent[1869]: 2025-05-13T23:58:30.989654Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
May 13 23:58:31.118214 waagent[1968]: 2025-05-13T23:58:31.118125Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
May 13 23:58:31.118720 waagent[1968]: 2025-05-13T23:58:31.118285Z INFO ExtHandler ExtHandler OS: flatcar 4284.0.0
May 13 23:58:31.118720 waagent[1968]: 2025-05-13T23:58:31.118357Z INFO ExtHandler ExtHandler Python: 3.11.11
May 13 23:58:31.118720 waagent[1968]: 2025-05-13T23:58:31.118429Z INFO ExtHandler ExtHandler CPU Arch: x86_64
May 13 23:58:31.213525 waagent[1968]: 2025-05-13T23:58:31.213415Z INFO ExtHandler ExtHandler Distro: flatcar-4284.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
May 13 23:58:31.213766 waagent[1968]: 2025-05-13T23:58:31.213723Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 13 23:58:31.213855 waagent[1968]: 2025-05-13T23:58:31.213821Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
May 13 23:58:31.220483 waagent[1968]: 2025-05-13T23:58:31.220424Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
May 13 23:58:31.226004 waagent[1968]: 2025-05-13T23:58:31.225900Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164
May 13 23:58:31.226447 waagent[1968]: 2025-05-13T23:58:31.226397Z INFO ExtHandler
May 13 23:58:31.226547 waagent[1968]: 2025-05-13T23:58:31.226486Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: aaa0d38b-9f54-4bcd-9f9e-bf7f390ee6b8 eTag: 3733399832530540232 source: Fabric]
May 13 23:58:31.226862 waagent[1968]: 2025-05-13T23:58:31.226815Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
May 13 23:58:31.227368 waagent[1968]: 2025-05-13T23:58:31.227320Z INFO ExtHandler
May 13 23:58:31.227446 waagent[1968]: 2025-05-13T23:58:31.227391Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
May 13 23:58:31.231160 waagent[1968]: 2025-05-13T23:58:31.231122Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
May 13 23:58:31.299540 waagent[1968]: 2025-05-13T23:58:31.299448Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FBF5F80F1ED1CA2F7D67AF50507C00794B0AD1B8', 'hasPrivateKey': False}
May 13 23:58:31.299967 waagent[1968]: 2025-05-13T23:58:31.299920Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A37CD1314A4C07D94C62FB3A67A3C34C07AB9C09', 'hasPrivateKey': True}
May 13 23:58:31.300373 waagent[1968]: 2025-05-13T23:58:31.300331Z INFO ExtHandler Fetch goal state completed
May 13 23:58:31.317599 waagent[1968]: 2025-05-13T23:58:31.317542Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025)
May 13 23:58:31.322357 waagent[1968]: 2025-05-13T23:58:31.322305Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1968
May 13 23:58:31.322499 waagent[1968]: 2025-05-13T23:58:31.322462Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
May 13 23:58:31.322855 waagent[1968]: 2025-05-13T23:58:31.322815Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
May 13 23:58:31.324241 waagent[1968]: 2025-05-13T23:58:31.324197Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4284.0.0', '', 'Flatcar Container Linux by Kinvolk']
May 13 23:58:31.324658 waagent[1968]: 2025-05-13T23:58:31.324621Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4284.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
May 13 23:58:31.324814 waagent[1968]: 2025-05-13T23:58:31.324779Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
May 13 23:58:31.325389 waagent[1968]: 2025-05-13T23:58:31.325347Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
May 13 23:58:31.375166 waagent[1968]: 2025-05-13T23:58:31.375119Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
May 13 23:58:31.375383 waagent[1968]: 2025-05-13T23:58:31.375343Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
May 13 23:58:31.382073 waagent[1968]: 2025-05-13T23:58:31.381861Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
May 13 23:58:31.389134 systemd[1]: Reload requested from client PID 1985 ('systemctl') (unit waagent.service)...
May 13 23:58:31.389151 systemd[1]: Reloading...
May 13 23:58:31.504535 zram_generator::config[2030]: No configuration found.
May 13 23:58:31.621699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:58:31.741588 systemd[1]: Reloading finished in 351 ms.
May 13 23:58:31.760550 waagent[1968]: 2025-05-13T23:58:31.759628Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
May 13 23:58:31.760550 waagent[1968]: 2025-05-13T23:58:31.759803Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
May 13 23:58:32.352315 waagent[1968]: 2025-05-13T23:58:32.352216Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
May 13 23:58:32.352877 waagent[1968]: 2025-05-13T23:58:32.352756Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
May 13 23:58:32.353711 waagent[1968]: 2025-05-13T23:58:32.353647Z INFO ExtHandler ExtHandler Starting env monitor service.
May 13 23:58:32.353851 waagent[1968]: 2025-05-13T23:58:32.353803Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 13 23:58:32.354319 waagent[1968]: 2025-05-13T23:58:32.354262Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
May 13 23:58:32.354460 waagent[1968]: 2025-05-13T23:58:32.354414Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 13 23:58:32.354561 waagent[1968]: 2025-05-13T23:58:32.354524Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
May 13 23:58:32.354730 waagent[1968]: 2025-05-13T23:58:32.354692Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
May 13 23:58:32.355098 waagent[1968]: 2025-05-13T23:58:32.355047Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
May 13 23:58:32.355347 waagent[1968]: 2025-05-13T23:58:32.355302Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
May 13 23:58:32.355563 waagent[1968]: 2025-05-13T23:58:32.355474Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
May 13 23:58:32.355732 waagent[1968]: 2025-05-13T23:58:32.355685Z INFO EnvHandler ExtHandler Configure routes
May 13 23:58:32.356179 waagent[1968]: 2025-05-13T23:58:32.356131Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
May 13 23:58:32.356366 waagent[1968]: 2025-05-13T23:58:32.356323Z INFO EnvHandler ExtHandler Gateway:None
May 13 23:58:32.356724 waagent[1968]: 2025-05-13T23:58:32.356675Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
May 13 23:58:32.356814 waagent[1968]: 2025-05-13T23:58:32.356745Z INFO EnvHandler ExtHandler Routes:None
May 13 23:58:32.357573 waagent[1968]: 2025-05-13T23:58:32.357476Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
May 13 23:58:32.357925 waagent[1968]: 2025-05-13T23:58:32.357871Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
May 13 23:58:32.357925 waagent[1968]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
May 13 23:58:32.357925 waagent[1968]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
May 13 23:58:32.357925 waagent[1968]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
May 13 23:58:32.357925 waagent[1968]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
May 13 23:58:32.357925 waagent[1968]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 13 23:58:32.357925 waagent[1968]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 13 23:58:32.370542 waagent[1968]: 2025-05-13T23:58:32.370482Z INFO ExtHandler ExtHandler
May 13 23:58:32.370623 waagent[1968]: 2025-05-13T23:58:32.370596Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 28bcf1dc-951c-4a20-b693-b069d1d3d6e5 correlation 76b10e95-a0f1-43ba-9adf-e5ef7cc07495 created: 2025-05-13T23:56:46.919128Z]
May 13 23:58:32.371064 waagent[1968]: 2025-05-13T23:58:32.371019Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
May 13 23:58:32.372017 waagent[1968]: 2025-05-13T23:58:32.371964Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
May 13 23:58:32.413108 waagent[1968]: 2025-05-13T23:58:32.412922Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 7992FD67-F0FB-4897-8A2C-E77FA9037832;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
May 13 23:58:32.437499 waagent[1968]: 2025-05-13T23:58:32.437426Z INFO MonitorHandler ExtHandler Network interfaces:
May 13 23:58:32.437499 waagent[1968]: Executing ['ip', '-a', '-o', 'link']:
May 13 23:58:32.437499 waagent[1968]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
May 13 23:58:32.437499 waagent[1968]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:41:f0:78 brd ff:ff:ff:ff:ff:ff
May 13 23:58:32.437499 waagent[1968]: 3: enP36631s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:41:f0:78 brd ff:ff:ff:ff:ff:ff\ altname enP36631p0s2
May 13 23:58:32.437499 waagent[1968]: Executing ['ip', '-4', '-a', '-o', 'address']:
May 13 23:58:32.437499 waagent[1968]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
May 13 23:58:32.437499 waagent[1968]: 2: eth0 inet 10.200.8.4/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
May 13 23:58:32.437499 waagent[1968]: Executing ['ip', '-6', '-a', '-o', 'address']:
May 13 23:58:32.437499 waagent[1968]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
May 13 23:58:32.437499 waagent[1968]: 2: eth0 inet6 fe80::7eed:8dff:fe41:f078/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
May 13 23:58:32.437499 waagent[1968]: 3: enP36631s1 inet6 fe80::7eed:8dff:fe41:f078/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
May 13 23:58:32.483528 waagent[1968]: 2025-05-13T23:58:32.483456Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
May 13 23:58:32.483528 waagent[1968]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
May 13 23:58:32.483528 waagent[1968]: pkts bytes target prot opt in out source destination
May 13 23:58:32.483528 waagent[1968]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
May 13 23:58:32.483528 waagent[1968]: pkts bytes target prot opt in out source destination
May 13 23:58:32.483528 waagent[1968]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
May 13 23:58:32.483528 waagent[1968]: pkts bytes target prot opt in out source destination
May 13 23:58:32.483528 waagent[1968]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
May 13 23:58:32.483528 waagent[1968]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
May 13 23:58:32.483528 waagent[1968]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
May 13 23:58:32.486812 waagent[1968]: 2025-05-13T23:58:32.486757Z INFO EnvHandler ExtHandler Current Firewall rules:
May 13 23:58:32.486812 waagent[1968]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
May 13 23:58:32.486812 waagent[1968]: pkts bytes target prot opt in out source destination
May 13 23:58:32.486812 waagent[1968]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
May 13 23:58:32.486812 waagent[1968]: pkts bytes target prot opt in out source destination
May 13 23:58:32.486812 waagent[1968]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
May 13 23:58:32.486812 waagent[1968]: pkts bytes target prot opt in out source destination
May 13 23:58:32.486812 waagent[1968]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16
tcp dpt:53 May 13 23:58:32.486812 waagent[1968]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 13 23:58:32.486812 waagent[1968]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 13 23:58:32.487462 waagent[1968]: 2025-05-13T23:58:32.487376Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 13 23:58:38.451062 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 23:58:38.452876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:58:38.581326 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:58:38.593877 (kubelet)[2122]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:58:39.181217 kubelet[2122]: E0513 23:58:39.181157 2122 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:58:39.184689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:58:39.184872 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:58:39.185391 systemd[1]: kubelet.service: Consumed 155ms CPU time, 104.5M memory peak. May 13 23:58:49.201618 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 23:58:49.203521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:58:49.647275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
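The routing table that waagent's MonitorHandler dumps above is `/proc/net/route` verbatim: the Destination, Gateway, and Mask columns are 32-bit values in host byte order (little-endian on x86) printed as hex, which is why `0108C80A` rather than a dotted quad appears for the default gateway. A minimal sketch decoding those fields back to dotted-quad form, using sample values copied from the dump above:

```python
import socket
import struct

def decode_route_hex(field: str) -> str:
    """Convert a /proc/net/route hex field (host byte order on x86,
    i.e. little-endian) back to dotted-quad notation."""
    return socket.inet_ntoa(struct.pack("<L", int(field, 16)))

# Fields taken from the MonitorHandler routing-table dump above.
default_gw = decode_route_hex("0108C80A")  # gateway of the default route
subnet = decode_route_hex("0008C80A")      # on-link destination
netmask = decode_route_hex("00FFFFFF")     # its mask

print(default_gw, subnet, netmask)  # 10.200.8.1 10.200.8.0 255.255.255.0
```

The decoded gateway `10.200.8.1` is consistent with the `eth0 inet 10.200.8.4/24` address shown later in the same dump.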
May 13 23:58:49.657005 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:58:49.927313 kubelet[2137]: E0513 23:58:49.927149 2137 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:58:49.929752 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:58:49.929934 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:58:49.930337 systemd[1]: kubelet.service: Consumed 158ms CPU time, 104.3M memory peak. May 13 23:58:49.964607 chronyd[1742]: Selected source PHC0 May 13 23:58:57.076445 kernel: hv_balloon: Max. dynamic memory size: 8192 MB May 13 23:58:59.951377 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 13 23:58:59.953457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:00.327681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
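The kubelet failures above are expected on a node that has not yet run `kubeadm init`/`kubeadm join` (which is what writes `/var/lib/kubelet/config.yaml`), so systemd keeps rescheduling the unit. The spacing of the restart attempts can be read straight off the log timestamps; the roughly 10.75 s gap suggests a `RestartSec=` of about 10 seconds plus startup overhead, though the exact unit setting is an assumption here, not something the log states:

```python
from datetime import datetime

# Times at which systemd logged "Scheduled restart job" for kubelet
# (restart counters 1, 2 and 3, copied from the log above).
attempts = ["23:58:38.451062", "23:58:49.201618", "23:58:59.951377"]
times = [datetime.strptime(t, "%H:%M:%S.%f") for t in attempts]

# Gap between consecutive restart attempts, in seconds.
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print(gaps)  # both gaps are roughly 10.75 s
```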
May 13 23:59:00.337912 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:59:00.670590 kubelet[2152]: E0513 23:59:00.670478 2152 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:59:00.672957 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:59:00.673145 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:59:00.673586 systemd[1]: kubelet.service: Consumed 159ms CPU time, 101.8M memory peak. May 13 23:59:04.084221 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:59:04.085706 systemd[1]: Started sshd@0-10.200.8.4:22-10.200.16.10:35096.service - OpenSSH per-connection server daemon (10.200.16.10:35096). May 13 23:59:04.882903 sshd[2161]: Accepted publickey for core from 10.200.16.10 port 35096 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 13 23:59:04.884735 sshd-session[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:04.889722 systemd-logind[1718]: New session 3 of user core. May 13 23:59:04.896692 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:59:05.432309 systemd[1]: Started sshd@1-10.200.8.4:22-10.200.16.10:35108.service - OpenSSH per-connection server daemon (10.200.16.10:35108). 
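The firewall dump that EnvHandler printed earlier shows the three OUTPUT rules waagent programs for the Azure WireServer endpoint 168.63.129.16: accept DNS traffic (tcp dpt:53), accept traffic owned by UID 0 (the agent runs as root), and drop any other NEW or INVALID connection to that address. A sketch parsing those counter lines into structured records, with the rule text copied from the dump:

```python
import re

# OUTPUT-chain rules exactly as printed in the EnvHandler firewall dump.
rules_text = """\
0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
"""

# pkts bytes target proto opt in out source destination [match...]
RULE = re.compile(
    r"^\s*(\d+)\s+(\d+)\s+(ACCEPT|DROP)\s+\S+\s+--\s+\S+\s+\S+\s+(\S+)\s+(\S+)\s*(.*)$"
)

rules = []
for line in rules_text.splitlines():
    m = RULE.match(line)
    if m:
        _pkts, _bytes, target, src, dst, extra = m.groups()
        rules.append({"target": target, "src": src, "dst": dst,
                      "match": extra.strip()})

print(rules)
```

All three rules point at the same destination; only the match portion (port, owner, conntrack state) differs.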
May 13 23:59:06.068436 sshd[2166]: Accepted publickey for core from 10.200.16.10 port 35108 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 13 23:59:06.070218 sshd-session[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:06.076186 systemd-logind[1718]: New session 4 of user core. May 13 23:59:06.086711 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:59:06.512329 sshd[2168]: Connection closed by 10.200.16.10 port 35108 May 13 23:59:06.513271 sshd-session[2166]: pam_unix(sshd:session): session closed for user core May 13 23:59:06.517389 systemd[1]: sshd@1-10.200.8.4:22-10.200.16.10:35108.service: Deactivated successfully. May 13 23:59:06.519264 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:59:06.520038 systemd-logind[1718]: Session 4 logged out. Waiting for processes to exit. May 13 23:59:06.520926 systemd-logind[1718]: Removed session 4. May 13 23:59:06.623286 systemd[1]: Started sshd@2-10.200.8.4:22-10.200.16.10:35124.service - OpenSSH per-connection server daemon (10.200.16.10:35124). May 13 23:59:07.253786 sshd[2174]: Accepted publickey for core from 10.200.16.10 port 35124 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 13 23:59:07.255587 sshd-session[2174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:07.260320 systemd-logind[1718]: New session 5 of user core. May 13 23:59:07.267687 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 23:59:07.691950 sshd[2176]: Connection closed by 10.200.16.10 port 35124 May 13 23:59:07.693917 sshd-session[2174]: pam_unix(sshd:session): session closed for user core May 13 23:59:07.696720 systemd[1]: sshd@2-10.200.8.4:22-10.200.16.10:35124.service: Deactivated successfully. May 13 23:59:07.698700 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:59:07.700175 systemd-logind[1718]: Session 5 logged out. 
Waiting for processes to exit. May 13 23:59:07.701129 systemd-logind[1718]: Removed session 5. May 13 23:59:07.804365 systemd[1]: Started sshd@3-10.200.8.4:22-10.200.16.10:35134.service - OpenSSH per-connection server daemon (10.200.16.10:35134). May 13 23:59:08.444593 sshd[2182]: Accepted publickey for core from 10.200.16.10 port 35134 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 13 23:59:08.446011 sshd-session[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:08.450211 systemd-logind[1718]: New session 6 of user core. May 13 23:59:08.453657 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 23:59:08.890439 sshd[2184]: Connection closed by 10.200.16.10 port 35134 May 13 23:59:08.891383 sshd-session[2182]: pam_unix(sshd:session): session closed for user core May 13 23:59:08.894559 systemd[1]: sshd@3-10.200.8.4:22-10.200.16.10:35134.service: Deactivated successfully. May 13 23:59:08.896778 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:59:08.898434 systemd-logind[1718]: Session 6 logged out. Waiting for processes to exit. May 13 23:59:08.899594 systemd-logind[1718]: Removed session 6. May 13 23:59:09.005380 systemd[1]: Started sshd@4-10.200.8.4:22-10.200.16.10:40040.service - OpenSSH per-connection server daemon (10.200.16.10:40040). May 13 23:59:09.636868 sshd[2190]: Accepted publickey for core from 10.200.16.10 port 40040 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 13 23:59:09.638366 sshd-session[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:09.642588 systemd-logind[1718]: New session 7 of user core. May 13 23:59:09.648680 systemd[1]: Started session-7.scope - Session 7 of User core. 
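The sshd/pam_unix entries above come in matched pairs: each "session opened" for user core is eventually followed by a "session closed". A small sketch tallying those events from sample lines copied out of the log, which makes it easy to spot a session that was opened but never closed:

```python
import re

# pam_unix session events, sample lines taken from the sshd log above.
sample = """\
sshd-session[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
sshd-session[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
sshd-session[2166]: pam_unix(sshd:session): session closed for user core
sshd-session[2174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
sshd-session[2174]: pam_unix(sshd:session): session closed for user core
"""

opened = len(re.findall(r"session opened for user \w+", sample))
closed = len(re.findall(r"session closed for user \w+", sample))
print(opened, closed)  # 3 2 -- the session from pid 2161 is still active
```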
May 13 23:59:10.145431 sudo[2193]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 23:59:10.145801 sudo[2193]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:59:10.161179 sudo[2193]: pam_unix(sudo:session): session closed for user root May 13 23:59:10.263260 sshd[2192]: Connection closed by 10.200.16.10 port 40040 May 13 23:59:10.264410 sshd-session[2190]: pam_unix(sshd:session): session closed for user core May 13 23:59:10.269411 systemd[1]: sshd@4-10.200.8.4:22-10.200.16.10:40040.service: Deactivated successfully. May 13 23:59:10.271429 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:59:10.272214 systemd-logind[1718]: Session 7 logged out. Waiting for processes to exit. May 13 23:59:10.273131 systemd-logind[1718]: Removed session 7. May 13 23:59:10.375674 systemd[1]: Started sshd@5-10.200.8.4:22-10.200.16.10:40042.service - OpenSSH per-connection server daemon (10.200.16.10:40042). May 13 23:59:10.701130 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 13 23:59:10.702931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:10.960484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:10.968833 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:59:11.008568 sshd[2199]: Accepted publickey for core from 10.200.16.10 port 40042 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 13 23:59:11.010039 sshd-session[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:11.014392 systemd-logind[1718]: New session 8 of user core. May 13 23:59:11.021678 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 13 23:59:11.352466 sudo[2217]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 23:59:11.352852 sudo[2217]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:59:11.356622 sudo[2217]: pam_unix(sudo:session): session closed for user root May 13 23:59:11.359535 update_engine[1719]: I20250513 23:59:11.357551 1719 update_attempter.cc:509] Updating boot flags... May 13 23:59:11.363276 sudo[2216]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 23:59:11.363653 sudo[2216]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:59:11.372994 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:59:11.418903 kubelet[2209]: E0513 23:59:11.418811 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:59:11.419245 augenrules[2241]: No rules May 13 23:59:11.419820 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:59:11.421276 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:59:11.422826 sudo[2216]: pam_unix(sudo:session): session closed for user root May 13 23:59:11.426747 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:59:11.426919 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:59:11.427267 systemd[1]: kubelet.service: Consumed 152ms CPU time, 104M memory peak. 
May 13 23:59:11.471530 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2260) May 13 23:59:11.530730 sshd[2215]: Connection closed by 10.200.16.10 port 40042 May 13 23:59:11.530624 sshd-session[2199]: pam_unix(sshd:session): session closed for user core May 13 23:59:11.542121 systemd[1]: sshd@5-10.200.8.4:22-10.200.16.10:40042.service: Deactivated successfully. May 13 23:59:11.547364 systemd[1]: session-8.scope: Deactivated successfully. May 13 23:59:11.550626 systemd-logind[1718]: Session 8 logged out. Waiting for processes to exit. May 13 23:59:11.554059 systemd-logind[1718]: Removed session 8. May 13 23:59:11.641387 systemd[1]: Started sshd@6-10.200.8.4:22-10.200.16.10:40048.service - OpenSSH per-connection server daemon (10.200.16.10:40048). May 13 23:59:11.653532 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2262) May 13 23:59:11.828554 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2262) May 13 23:59:12.308958 sshd[2320]: Accepted publickey for core from 10.200.16.10 port 40048 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 13 23:59:12.310675 sshd-session[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:12.314944 systemd-logind[1718]: New session 9 of user core. May 13 23:59:12.321674 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 23:59:12.653019 sudo[2417]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:59:12.653379 sudo[2417]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:59:14.435991 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 13 23:59:14.445876 (dockerd)[2434]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 23:59:15.635580 dockerd[2434]: time="2025-05-13T23:59:15.635498281Z" level=info msg="Starting up" May 13 23:59:15.637598 dockerd[2434]: time="2025-05-13T23:59:15.637555212Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 23:59:15.723897 dockerd[2434]: time="2025-05-13T23:59:15.723854012Z" level=info msg="Loading containers: start." May 13 23:59:15.943545 kernel: Initializing XFRM netlink socket May 13 23:59:16.025766 systemd-networkd[1338]: docker0: Link UP May 13 23:59:16.096021 dockerd[2434]: time="2025-05-13T23:59:16.095972414Z" level=info msg="Loading containers: done." May 13 23:59:16.115723 dockerd[2434]: time="2025-05-13T23:59:16.115673311Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 23:59:16.115911 dockerd[2434]: time="2025-05-13T23:59:16.115773612Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 13 23:59:16.115969 dockerd[2434]: time="2025-05-13T23:59:16.115910514Z" level=info msg="Daemon has completed initialization" May 13 23:59:16.163713 dockerd[2434]: time="2025-05-13T23:59:16.163649333Z" level=info msg="API listen on /run/docker.sock" May 13 23:59:16.164054 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 23:59:17.457190 containerd[1734]: time="2025-05-13T23:59:17.456846203Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 13 23:59:18.173413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1559122907.mount: Deactivated successfully. 
May 13 23:59:19.717698 containerd[1734]: time="2025-05-13T23:59:19.717633041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:19.721146 containerd[1734]: time="2025-05-13T23:59:19.721077493Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682887" May 13 23:59:19.725639 containerd[1734]: time="2025-05-13T23:59:19.725583960Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:19.732377 containerd[1734]: time="2025-05-13T23:59:19.732321762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:19.733261 containerd[1734]: time="2025-05-13T23:59:19.733226976Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.276332871s" May 13 23:59:19.733827 containerd[1734]: time="2025-05-13T23:59:19.733365878Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 13 23:59:19.734105 containerd[1734]: time="2025-05-13T23:59:19.734036588Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 13 23:59:21.334804 containerd[1734]: time="2025-05-13T23:59:21.334743687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:21.338772 containerd[1734]: time="2025-05-13T23:59:21.338698347Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779597" May 13 23:59:21.341714 containerd[1734]: time="2025-05-13T23:59:21.341652791Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:21.349018 containerd[1734]: time="2025-05-13T23:59:21.348956601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:21.350030 containerd[1734]: time="2025-05-13T23:59:21.349880315Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.615808327s" May 13 23:59:21.350030 containerd[1734]: time="2025-05-13T23:59:21.349919316Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 13 23:59:21.350787 containerd[1734]: time="2025-05-13T23:59:21.350622827Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 13 23:59:21.451291 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 13 23:59:21.453217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:21.585270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 23:59:21.595836 (kubelet)[2693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:59:22.180524 kubelet[2693]: E0513 23:59:22.180458 2693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:59:22.182917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:59:22.183130 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:59:22.183528 systemd[1]: kubelet.service: Consumed 154ms CPU time, 104.1M memory peak. May 13 23:59:23.322255 containerd[1734]: time="2025-05-13T23:59:23.322201813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:23.324364 containerd[1734]: time="2025-05-13T23:59:23.324294344Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169946" May 13 23:59:23.328094 containerd[1734]: time="2025-05-13T23:59:23.328038198Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:23.333874 containerd[1734]: time="2025-05-13T23:59:23.333841584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:23.334944 containerd[1734]: time="2025-05-13T23:59:23.334723497Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id 
\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.984065069s" May 13 23:59:23.334944 containerd[1734]: time="2025-05-13T23:59:23.334764197Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 13 23:59:23.335696 containerd[1734]: time="2025-05-13T23:59:23.335669210Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 23:59:24.488011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount560434310.mount: Deactivated successfully. May 13 23:59:25.007610 containerd[1734]: time="2025-05-13T23:59:25.007559149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:25.010611 containerd[1734]: time="2025-05-13T23:59:25.010528192Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917864" May 13 23:59:25.013282 containerd[1734]: time="2025-05-13T23:59:25.013213632Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:25.019113 containerd[1734]: time="2025-05-13T23:59:25.019046117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:25.019855 containerd[1734]: time="2025-05-13T23:59:25.019702527Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.684000716s" May 13 23:59:25.019855 containerd[1734]: time="2025-05-13T23:59:25.019742727Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 13 23:59:25.020499 containerd[1734]: time="2025-05-13T23:59:25.020475538Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 13 23:59:25.614486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2194694998.mount: Deactivated successfully. May 13 23:59:26.808168 containerd[1734]: time="2025-05-13T23:59:26.808110375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:26.810446 containerd[1734]: time="2025-05-13T23:59:26.810374808Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" May 13 23:59:26.813034 containerd[1734]: time="2025-05-13T23:59:26.812969646Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:26.817354 containerd[1734]: time="2025-05-13T23:59:26.817281110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:26.818406 containerd[1734]: time="2025-05-13T23:59:26.818252424Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.797631184s" May 13 23:59:26.818406 containerd[1734]: time="2025-05-13T23:59:26.818294525Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 13 23:59:26.819089 containerd[1734]: time="2025-05-13T23:59:26.819056136Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 23:59:27.353160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055049887.mount: Deactivated successfully. May 13 23:59:27.374295 containerd[1734]: time="2025-05-13T23:59:27.374232684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:59:27.376779 containerd[1734]: time="2025-05-13T23:59:27.376708420Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 13 23:59:27.384369 containerd[1734]: time="2025-05-13T23:59:27.384291432Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:59:27.388256 containerd[1734]: time="2025-05-13T23:59:27.388199689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:59:27.389007 containerd[1734]: time="2025-05-13T23:59:27.388838198Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 569.738162ms" May 13 23:59:27.389007 containerd[1734]: time="2025-05-13T23:59:27.388876499Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 23:59:27.389703 containerd[1734]: time="2025-05-13T23:59:27.389647610Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 13 23:59:27.934783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1494781718.mount: Deactivated successfully. May 13 23:59:30.159724 containerd[1734]: time="2025-05-13T23:59:30.159646736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:30.164568 containerd[1734]: time="2025-05-13T23:59:30.164486709Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" May 13 23:59:30.168397 containerd[1734]: time="2025-05-13T23:59:30.168310467Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:30.173373 containerd[1734]: time="2025-05-13T23:59:30.173306243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:30.174678 containerd[1734]: time="2025-05-13T23:59:30.174278658Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"57680541\" in 2.784577447s" May 13 23:59:30.174678 containerd[1734]: time="2025-05-13T23:59:30.174319358Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 13 23:59:32.201357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 13 23:59:32.205725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:32.425686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:32.433834 (kubelet)[2850]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:59:32.495850 kubelet[2850]: E0513 23:59:32.495703 2850 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:59:32.499314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:59:32.499668 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:59:32.500123 systemd[1]: kubelet.service: Consumed 189ms CPU time, 103.8M memory peak. May 13 23:59:33.436591 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:33.436815 systemd[1]: kubelet.service: Consumed 189ms CPU time, 103.8M memory peak. May 13 23:59:33.439262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:33.478060 systemd[1]: Reload requested from client PID 2865 ('systemctl') (unit session-9.scope)... May 13 23:59:33.478076 systemd[1]: Reloading... May 13 23:59:33.614536 zram_generator::config[2915]: No configuration found. 
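Each containerd "Pulled image" line above reports both the image size in bytes and the wall-clock pull time, so effective registry throughput can be read straight out of the log. A quick sketch over three of the size/duration pairs reported above (figures copied verbatim from the pull messages):

```python
# (size in bytes, duration in seconds) from containerd's "Pulled image" lines.
pulls = {
    "kube-apiserver:v1.32.4": (28_679_679, 2.276332871),
    "kube-proxy:v1.32.4": (30_916_875, 1.684000716),
    "etcd:3.5.16-0": (57_680_541, 2.784577447),
}

# Approximate throughput per pull, in MB/s (10^6 bytes per second).
rates = {image: size / secs / 1e6 for image, (size, secs) in pulls.items()}
for image, rate in rates.items():
    print(f"{image}: {rate:.1f} MB/s")
```

The rates cluster in the low tens of MB/s, so pull time here tracks image size rather than any per-image overhead.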
May 13 23:59:33.729378 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:59:33.854812 systemd[1]: Reloading finished in 376 ms. May 13 23:59:33.953425 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 23:59:33.953599 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 23:59:33.954030 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:33.958158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:35.030365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:35.039929 (kubelet)[2979]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:59:35.077497 kubelet[2979]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:59:35.077497 kubelet[2979]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 23:59:35.077497 kubelet[2979]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 23:59:35.077497 kubelet[2979]: I0513 23:59:35.076780 2979 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:59:36.211436 kubelet[2979]: I0513 23:59:36.211387 2979 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 23:59:36.211436 kubelet[2979]: I0513 23:59:36.211419 2979 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:59:36.211949 kubelet[2979]: I0513 23:59:36.211802 2979 server.go:954] "Client rotation is on, will bootstrap in background" May 13 23:59:36.237261 kubelet[2979]: E0513 23:59:36.237212 2979 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" May 13 23:59:36.238389 kubelet[2979]: I0513 23:59:36.238357 2979 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:59:36.246856 kubelet[2979]: I0513 23:59:36.246797 2979 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:59:36.252442 kubelet[2979]: I0513 23:59:36.252407 2979 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:59:36.254172 kubelet[2979]: I0513 23:59:36.253399 2979 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:59:36.254172 kubelet[2979]: I0513 23:59:36.253459 2979 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-1d9e750aa6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:59:36.254172 kubelet[2979]: I0513 23:59:36.253805 2979 topology_manager.go:138] "Creating topology manager 
with none policy" May 13 23:59:36.254172 kubelet[2979]: I0513 23:59:36.253818 2979 container_manager_linux.go:304] "Creating device plugin manager" May 13 23:59:36.254465 kubelet[2979]: I0513 23:59:36.253966 2979 state_mem.go:36] "Initialized new in-memory state store" May 13 23:59:36.258544 kubelet[2979]: I0513 23:59:36.258305 2979 kubelet.go:446] "Attempting to sync node with API server" May 13 23:59:36.258544 kubelet[2979]: I0513 23:59:36.258347 2979 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:59:36.258544 kubelet[2979]: I0513 23:59:36.258381 2979 kubelet.go:352] "Adding apiserver pod source" May 13 23:59:36.258544 kubelet[2979]: I0513 23:59:36.258394 2979 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:59:36.263766 kubelet[2979]: I0513 23:59:36.263728 2979 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:59:36.264445 kubelet[2979]: I0513 23:59:36.264315 2979 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:59:36.265733 kubelet[2979]: W0513 23:59:36.265000 2979 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 13 23:59:36.267466 kubelet[2979]: I0513 23:59:36.266982 2979 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 23:59:36.267466 kubelet[2979]: I0513 23:59:36.267023 2979 server.go:1287] "Started kubelet" May 13 23:59:36.267466 kubelet[2979]: W0513 23:59:36.267179 2979 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused May 13 23:59:36.267466 kubelet[2979]: E0513 23:59:36.267238 2979 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" May 13 23:59:36.267466 kubelet[2979]: W0513 23:59:36.267321 2979 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-1d9e750aa6&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused May 13 23:59:36.267466 kubelet[2979]: E0513 23:59:36.267358 2979 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-1d9e750aa6&limit=500&resourceVersion=0\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" May 13 23:59:36.272912 kubelet[2979]: I0513 23:59:36.272873 2979 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:59:36.273193 kubelet[2979]: I0513 23:59:36.273020 2979 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:59:36.274876 kubelet[2979]: I0513 23:59:36.274855 
2979 server.go:490] "Adding debug handlers to kubelet server" May 13 23:59:36.275944 kubelet[2979]: I0513 23:59:36.275887 2979 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:59:36.277148 kubelet[2979]: I0513 23:59:36.276128 2979 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:59:36.278009 kubelet[2979]: E0513 23:59:36.277984 2979 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:59:36.278455 kubelet[2979]: I0513 23:59:36.278428 2979 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:59:36.280404 kubelet[2979]: I0513 23:59:36.280387 2979 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 23:59:36.280768 kubelet[2979]: E0513 23:59:36.280743 2979 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-1d9e750aa6\" not found" May 13 23:59:36.282496 kubelet[2979]: E0513 23:59:36.281155 2979 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-n-1d9e750aa6.183f3ba8b7c92954 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-n-1d9e750aa6,UID:ci-4284.0.0-n-1d9e750aa6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-n-1d9e750aa6,},FirstTimestamp:2025-05-13 23:59:36.267000148 +0000 UTC m=+1.223337272,LastTimestamp:2025-05-13 23:59:36.267000148 +0000 UTC m=+1.223337272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-n-1d9e750aa6,}" May 13 23:59:36.282707 kubelet[2979]: E0513 23:59:36.282665 2979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-1d9e750aa6?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="200ms" May 13 23:59:36.282945 kubelet[2979]: I0513 23:59:36.282919 2979 reconciler.go:26] "Reconciler: start to sync state" May 13 23:59:36.283012 kubelet[2979]: I0513 23:59:36.282967 2979 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:59:36.283378 kubelet[2979]: W0513 23:59:36.283327 2979 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused May 13 23:59:36.283452 kubelet[2979]: E0513 23:59:36.283392 2979 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" May 13 23:59:36.285825 kubelet[2979]: I0513 23:59:36.285805 2979 factory.go:221] Registration of the containerd container factory successfully May 13 23:59:36.285825 kubelet[2979]: I0513 23:59:36.285824 2979 factory.go:221] Registration of the systemd container factory successfully May 13 23:59:36.285933 kubelet[2979]: I0513 23:59:36.285906 2979 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:59:36.293702 kubelet[2979]: I0513 23:59:36.293548 2979 
kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:59:36.295531 kubelet[2979]: I0513 23:59:36.295181 2979 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:59:36.295531 kubelet[2979]: I0513 23:59:36.295213 2979 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 23:59:36.295531 kubelet[2979]: I0513 23:59:36.295242 2979 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 23:59:36.295531 kubelet[2979]: I0513 23:59:36.295254 2979 kubelet.go:2388] "Starting kubelet main sync loop" May 13 23:59:36.295531 kubelet[2979]: E0513 23:59:36.295308 2979 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:59:36.303054 kubelet[2979]: W0513 23:59:36.303001 2979 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused May 13 23:59:36.303180 kubelet[2979]: E0513 23:59:36.303156 2979 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" May 13 23:59:36.321079 kubelet[2979]: I0513 23:59:36.321053 2979 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 23:59:36.321253 kubelet[2979]: I0513 23:59:36.321096 2979 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 23:59:36.321253 kubelet[2979]: I0513 23:59:36.321117 2979 state_mem.go:36] "Initialized new in-memory state store" May 13 23:59:36.325641 kubelet[2979]: I0513 
23:59:36.325614 2979 policy_none.go:49] "None policy: Start" May 13 23:59:36.325641 kubelet[2979]: I0513 23:59:36.325638 2979 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 23:59:36.325790 kubelet[2979]: I0513 23:59:36.325651 2979 state_mem.go:35] "Initializing new in-memory state store" May 13 23:59:36.334387 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 23:59:36.345550 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:59:36.348883 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 23:59:36.356347 kubelet[2979]: I0513 23:59:36.356315 2979 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:59:36.356927 kubelet[2979]: I0513 23:59:36.356549 2979 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:59:36.356927 kubelet[2979]: I0513 23:59:36.356570 2979 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:59:36.356927 kubelet[2979]: I0513 23:59:36.356826 2979 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:59:36.358640 kubelet[2979]: E0513 23:59:36.358529 2979 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 13 23:59:36.358640 kubelet[2979]: E0513 23:59:36.358580 2979 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284.0.0-n-1d9e750aa6\" not found" May 13 23:59:36.406410 systemd[1]: Created slice kubepods-burstable-pod4cb9b100f08d20b9081615b917964084.slice - libcontainer container kubepods-burstable-pod4cb9b100f08d20b9081615b917964084.slice. 
May 13 23:59:36.420853 kubelet[2979]: E0513 23:59:36.420818 2979 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-1d9e750aa6\" not found" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:36.424547 systemd[1]: Created slice kubepods-burstable-podefabe998443ace0e4ccbccdbd6b694a3.slice - libcontainer container kubepods-burstable-podefabe998443ace0e4ccbccdbd6b694a3.slice. May 13 23:59:36.426761 kubelet[2979]: E0513 23:59:36.426734 2979 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-1d9e750aa6\" not found" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:36.428706 systemd[1]: Created slice kubepods-burstable-pod104282df4f4d1cf3e839cc7313e4127d.slice - libcontainer container kubepods-burstable-pod104282df4f4d1cf3e839cc7313e4127d.slice. May 13 23:59:36.430351 kubelet[2979]: E0513 23:59:36.430326 2979 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-1d9e750aa6\" not found" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:36.459469 kubelet[2979]: I0513 23:59:36.459426 2979 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:36.459911 kubelet[2979]: E0513 23:59:36.459872 2979 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:36.483528 kubelet[2979]: E0513 23:59:36.483384 2979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-1d9e750aa6?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="400ms" May 13 23:59:36.485006 kubelet[2979]: I0513 23:59:36.484636 2979 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4cb9b100f08d20b9081615b917964084-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-1d9e750aa6\" (UID: \"4cb9b100f08d20b9081615b917964084\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:36.485006 kubelet[2979]: I0513 23:59:36.484679 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4cb9b100f08d20b9081615b917964084-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-1d9e750aa6\" (UID: \"4cb9b100f08d20b9081615b917964084\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:36.485006 kubelet[2979]: I0513 23:59:36.484712 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4cb9b100f08d20b9081615b917964084-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-1d9e750aa6\" (UID: \"4cb9b100f08d20b9081615b917964084\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:36.485006 kubelet[2979]: I0513 23:59:36.484744 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/104282df4f4d1cf3e839cc7313e4127d-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-1d9e750aa6\" (UID: \"104282df4f4d1cf3e839cc7313e4127d\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:36.485006 kubelet[2979]: I0513 23:59:36.484777 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/efabe998443ace0e4ccbccdbd6b694a3-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-1d9e750aa6\" (UID: \"efabe998443ace0e4ccbccdbd6b694a3\") " 
pod="kube-system/kube-scheduler-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:36.485365 kubelet[2979]: I0513 23:59:36.484818 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/104282df4f4d1cf3e839cc7313e4127d-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-1d9e750aa6\" (UID: \"104282df4f4d1cf3e839cc7313e4127d\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:36.485365 kubelet[2979]: I0513 23:59:36.484852 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/104282df4f4d1cf3e839cc7313e4127d-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-1d9e750aa6\" (UID: \"104282df4f4d1cf3e839cc7313e4127d\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:36.485365 kubelet[2979]: I0513 23:59:36.484881 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/104282df4f4d1cf3e839cc7313e4127d-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-1d9e750aa6\" (UID: \"104282df4f4d1cf3e839cc7313e4127d\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:36.485365 kubelet[2979]: I0513 23:59:36.484912 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/104282df4f4d1cf3e839cc7313e4127d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-1d9e750aa6\" (UID: \"104282df4f4d1cf3e839cc7313e4127d\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:36.662368 kubelet[2979]: I0513 23:59:36.662299 2979 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:36.662752 kubelet[2979]: 
E0513 23:59:36.662713 2979 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:36.722599 containerd[1734]: time="2025-05-13T23:59:36.722429061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-1d9e750aa6,Uid:4cb9b100f08d20b9081615b917964084,Namespace:kube-system,Attempt:0,}" May 13 23:59:36.728124 containerd[1734]: time="2025-05-13T23:59:36.728078847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-1d9e750aa6,Uid:efabe998443ace0e4ccbccdbd6b694a3,Namespace:kube-system,Attempt:0,}" May 13 23:59:36.731978 containerd[1734]: time="2025-05-13T23:59:36.731943706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-1d9e750aa6,Uid:104282df4f4d1cf3e839cc7313e4127d,Namespace:kube-system,Attempt:0,}" May 13 23:59:36.808427 containerd[1734]: time="2025-05-13T23:59:36.807716056Z" level=info msg="connecting to shim 8cbfa8c54c5218ab3b25629f4c7931ef49fbc70622f6b96f345e1ebc010eb68f" address="unix:///run/containerd/s/2eb1e838e20c6ab805009cf76f56852c1f5717a8c466e2688edcad83cfde011f" namespace=k8s.io protocol=ttrpc version=3 May 13 23:59:36.835937 containerd[1734]: time="2025-05-13T23:59:36.835881984Z" level=info msg="connecting to shim 7ac6a2cd9ff042b6a98fae61ec3d878971b025bb251b986fe94bf495002e0ba8" address="unix:///run/containerd/s/43336a33c2de09094ddb79cfd5e4cf386524d35136de97684dcf3ae2cee8010d" namespace=k8s.io protocol=ttrpc version=3 May 13 23:59:36.855431 containerd[1734]: time="2025-05-13T23:59:36.855377280Z" level=info msg="connecting to shim b58343769225cad3ff65fa3789db9f8aaef37ae6cadf51567b3327e21425fab6" address="unix:///run/containerd/s/9efef92fd69d1c3450e75522b775bdde1e9b3f3a33d6d0bd9d3f032180fce5b5" namespace=k8s.io protocol=ttrpc version=3 May 13 23:59:36.864711 systemd[1]: 
Started cri-containerd-8cbfa8c54c5218ab3b25629f4c7931ef49fbc70622f6b96f345e1ebc010eb68f.scope - libcontainer container 8cbfa8c54c5218ab3b25629f4c7931ef49fbc70622f6b96f345e1ebc010eb68f. May 13 23:59:36.884597 kubelet[2979]: E0513 23:59:36.884546 2979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-1d9e750aa6?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="800ms" May 13 23:59:36.892721 systemd[1]: Started cri-containerd-7ac6a2cd9ff042b6a98fae61ec3d878971b025bb251b986fe94bf495002e0ba8.scope - libcontainer container 7ac6a2cd9ff042b6a98fae61ec3d878971b025bb251b986fe94bf495002e0ba8. May 13 23:59:36.899073 systemd[1]: Started cri-containerd-b58343769225cad3ff65fa3789db9f8aaef37ae6cadf51567b3327e21425fab6.scope - libcontainer container b58343769225cad3ff65fa3789db9f8aaef37ae6cadf51567b3327e21425fab6. May 13 23:59:36.949387 containerd[1734]: time="2025-05-13T23:59:36.949320306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-1d9e750aa6,Uid:4cb9b100f08d20b9081615b917964084,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cbfa8c54c5218ab3b25629f4c7931ef49fbc70622f6b96f345e1ebc010eb68f\"" May 13 23:59:36.955812 containerd[1734]: time="2025-05-13T23:59:36.955671602Z" level=info msg="CreateContainer within sandbox \"8cbfa8c54c5218ab3b25629f4c7931ef49fbc70622f6b96f345e1ebc010eb68f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 23:59:36.976091 containerd[1734]: time="2025-05-13T23:59:36.976041911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-1d9e750aa6,Uid:104282df4f4d1cf3e839cc7313e4127d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ac6a2cd9ff042b6a98fae61ec3d878971b025bb251b986fe94bf495002e0ba8\"" May 13 23:59:36.980675 containerd[1734]: time="2025-05-13T23:59:36.980473879Z" level=info 
msg="CreateContainer within sandbox \"7ac6a2cd9ff042b6a98fae61ec3d878971b025bb251b986fe94bf495002e0ba8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 23:59:36.983356 containerd[1734]: time="2025-05-13T23:59:36.983286421Z" level=info msg="Container ddf9466bbd023f6dba1b10c48541c88a081f9eca5e56eedb2a59b00de8bfabec: CDI devices from CRI Config.CDIDevices: []" May 13 23:59:37.007566 containerd[1734]: time="2025-05-13T23:59:37.007496389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-1d9e750aa6,Uid:efabe998443ace0e4ccbccdbd6b694a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b58343769225cad3ff65fa3789db9f8aaef37ae6cadf51567b3327e21425fab6\"" May 13 23:59:37.009090 containerd[1734]: time="2025-05-13T23:59:37.009039712Z" level=info msg="CreateContainer within sandbox \"8cbfa8c54c5218ab3b25629f4c7931ef49fbc70622f6b96f345e1ebc010eb68f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ddf9466bbd023f6dba1b10c48541c88a081f9eca5e56eedb2a59b00de8bfabec\"" May 13 23:59:37.011376 containerd[1734]: time="2025-05-13T23:59:37.011319847Z" level=info msg="StartContainer for \"ddf9466bbd023f6dba1b10c48541c88a081f9eca5e56eedb2a59b00de8bfabec\"" May 13 23:59:37.013333 containerd[1734]: time="2025-05-13T23:59:37.013296177Z" level=info msg="connecting to shim ddf9466bbd023f6dba1b10c48541c88a081f9eca5e56eedb2a59b00de8bfabec" address="unix:///run/containerd/s/2eb1e838e20c6ab805009cf76f56852c1f5717a8c466e2688edcad83cfde011f" protocol=ttrpc version=3 May 13 23:59:37.016610 containerd[1734]: time="2025-05-13T23:59:37.016579827Z" level=info msg="CreateContainer within sandbox \"b58343769225cad3ff65fa3789db9f8aaef37ae6cadf51567b3327e21425fab6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 23:59:37.027030 containerd[1734]: time="2025-05-13T23:59:37.025890368Z" level=info msg="Container 061dde03e0d8a904e66b77639ca250a7c3869b9f1cac23e62ab21ed1b1961f4a: CDI 
devices from CRI Config.CDIDevices: []" May 13 23:59:37.030719 systemd[1]: Started cri-containerd-ddf9466bbd023f6dba1b10c48541c88a081f9eca5e56eedb2a59b00de8bfabec.scope - libcontainer container ddf9466bbd023f6dba1b10c48541c88a081f9eca5e56eedb2a59b00de8bfabec. May 13 23:59:37.046030 containerd[1734]: time="2025-05-13T23:59:37.045986673Z" level=info msg="Container 07c86d8728c9219c59351d35dc450146463d76a4f6244df349a38d0e984d2c62: CDI devices from CRI Config.CDIDevices: []" May 13 23:59:37.054593 containerd[1734]: time="2025-05-13T23:59:37.054440702Z" level=info msg="CreateContainer within sandbox \"7ac6a2cd9ff042b6a98fae61ec3d878971b025bb251b986fe94bf495002e0ba8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"061dde03e0d8a904e66b77639ca250a7c3869b9f1cac23e62ab21ed1b1961f4a\"" May 13 23:59:37.055051 containerd[1734]: time="2025-05-13T23:59:37.055024010Z" level=info msg="StartContainer for \"061dde03e0d8a904e66b77639ca250a7c3869b9f1cac23e62ab21ed1b1961f4a\"" May 13 23:59:37.056597 containerd[1734]: time="2025-05-13T23:59:37.056377931Z" level=info msg="connecting to shim 061dde03e0d8a904e66b77639ca250a7c3869b9f1cac23e62ab21ed1b1961f4a" address="unix:///run/containerd/s/43336a33c2de09094ddb79cfd5e4cf386524d35136de97684dcf3ae2cee8010d" protocol=ttrpc version=3 May 13 23:59:37.065638 kubelet[2979]: I0513 23:59:37.065092 2979 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:37.066931 kubelet[2979]: E0513 23:59:37.066722 2979 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:37.074307 containerd[1734]: time="2025-05-13T23:59:37.073750595Z" level=info msg="CreateContainer within sandbox \"b58343769225cad3ff65fa3789db9f8aaef37ae6cadf51567b3327e21425fab6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns 
container id \"07c86d8728c9219c59351d35dc450146463d76a4f6244df349a38d0e984d2c62\"" May 13 23:59:37.075214 containerd[1734]: time="2025-05-13T23:59:37.075189817Z" level=info msg="StartContainer for \"07c86d8728c9219c59351d35dc450146463d76a4f6244df349a38d0e984d2c62\"" May 13 23:59:37.076861 containerd[1734]: time="2025-05-13T23:59:37.076791541Z" level=info msg="connecting to shim 07c86d8728c9219c59351d35dc450146463d76a4f6244df349a38d0e984d2c62" address="unix:///run/containerd/s/9efef92fd69d1c3450e75522b775bdde1e9b3f3a33d6d0bd9d3f032180fce5b5" protocol=ttrpc version=3 May 13 23:59:37.085722 systemd[1]: Started cri-containerd-061dde03e0d8a904e66b77639ca250a7c3869b9f1cac23e62ab21ed1b1961f4a.scope - libcontainer container 061dde03e0d8a904e66b77639ca250a7c3869b9f1cac23e62ab21ed1b1961f4a. May 13 23:59:37.109255 systemd[1]: Started cri-containerd-07c86d8728c9219c59351d35dc450146463d76a4f6244df349a38d0e984d2c62.scope - libcontainer container 07c86d8728c9219c59351d35dc450146463d76a4f6244df349a38d0e984d2c62. 
May 13 23:59:37.122403 containerd[1734]: time="2025-05-13T23:59:37.122173130Z" level=info msg="StartContainer for \"ddf9466bbd023f6dba1b10c48541c88a081f9eca5e56eedb2a59b00de8bfabec\" returns successfully" May 13 23:59:37.186118 containerd[1734]: time="2025-05-13T23:59:37.185854997Z" level=info msg="StartContainer for \"061dde03e0d8a904e66b77639ca250a7c3869b9f1cac23e62ab21ed1b1961f4a\" returns successfully" May 13 23:59:37.261710 containerd[1734]: time="2025-05-13T23:59:37.261628447Z" level=info msg="StartContainer for \"07c86d8728c9219c59351d35dc450146463d76a4f6244df349a38d0e984d2c62\" returns successfully" May 13 23:59:37.322649 kubelet[2979]: E0513 23:59:37.322488 2979 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-1d9e750aa6\" not found" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:37.324917 kubelet[2979]: E0513 23:59:37.324886 2979 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-1d9e750aa6\" not found" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:37.328214 kubelet[2979]: E0513 23:59:37.328188 2979 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-1d9e750aa6\" not found" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:37.871050 kubelet[2979]: I0513 23:59:37.871018 2979 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:38.333124 kubelet[2979]: E0513 23:59:38.333080 2979 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-1d9e750aa6\" not found" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:38.334131 kubelet[2979]: E0513 23:59:38.334104 2979 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-1d9e750aa6\" not found" node="ci-4284.0.0-n-1d9e750aa6" May 
13 23:59:39.264149 kubelet[2979]: I0513 23:59:39.264110 2979 apiserver.go:52] "Watching apiserver" May 13 23:59:39.338557 kubelet[2979]: E0513 23:59:39.338477 2979 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284.0.0-n-1d9e750aa6\" not found" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:39.383343 kubelet[2979]: I0513 23:59:39.383257 2979 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:59:39.492565 kubelet[2979]: I0513 23:59:39.492526 2979 kubelet_node_status.go:79] "Successfully registered node" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:39.582122 kubelet[2979]: I0513 23:59:39.581968 2979 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:39.591157 kubelet[2979]: E0513 23:59:39.591103 2979 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4284.0.0-n-1d9e750aa6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:39.591157 kubelet[2979]: I0513 23:59:39.591151 2979 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:39.592869 kubelet[2979]: E0513 23:59:39.592714 2979 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284.0.0-n-1d9e750aa6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:39.592869 kubelet[2979]: I0513 23:59:39.592745 2979 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:39.594477 kubelet[2979]: E0513 23:59:39.594420 2979 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284.0.0-n-1d9e750aa6\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:39.638349 kubelet[2979]: I0513 23:59:39.638306 2979 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:39.641604 kubelet[2979]: E0513 23:59:39.640938 2979 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4284.0.0-n-1d9e750aa6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:40.658552 kubelet[2979]: I0513 23:59:40.658478 2979 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:40.668153 kubelet[2979]: W0513 23:59:40.668115 2979 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:59:41.739521 systemd[1]: Reload requested from client PID 3247 ('systemctl') (unit session-9.scope)... May 13 23:59:41.739540 systemd[1]: Reloading... May 13 23:59:41.862537 zram_generator::config[3294]: No configuration found. May 13 23:59:42.034159 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:59:42.165006 systemd[1]: Reloading finished in 424 ms. May 13 23:59:42.196400 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:42.216838 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:59:42.217077 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:42.217131 systemd[1]: kubelet.service: Consumed 818ms CPU time, 124.4M memory peak. 
May 13 23:59:42.219388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:52.385462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:52.396862 (kubelet)[3361]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:59:52.444322 kubelet[3361]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:59:52.444322 kubelet[3361]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 23:59:52.444322 kubelet[3361]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:59:52.444833 kubelet[3361]: I0513 23:59:52.444427 3361 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:59:52.450686 kubelet[3361]: I0513 23:59:52.450649 3361 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 23:59:52.450686 kubelet[3361]: I0513 23:59:52.450673 3361 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:59:52.451008 kubelet[3361]: I0513 23:59:52.450984 3361 server.go:954] "Client rotation is on, will bootstrap in background" May 13 23:59:52.452208 kubelet[3361]: I0513 23:59:52.452182 3361 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 13 23:59:52.455638 kubelet[3361]: I0513 23:59:52.454729 3361 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:59:52.458878 kubelet[3361]: I0513 23:59:52.458837 3361 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:59:52.464026 kubelet[3361]: I0513 23:59:52.463999 3361 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 23:59:52.466035 kubelet[3361]: I0513 23:59:52.464455 3361 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:59:52.466035 kubelet[3361]: I0513 23:59:52.464488 3361 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-1d9e750aa6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"
none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:59:52.466035 kubelet[3361]: I0513 23:59:52.464718 3361 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:59:52.466035 kubelet[3361]: I0513 23:59:52.464728 3361 container_manager_linux.go:304] "Creating device plugin manager" May 13 23:59:52.466349 kubelet[3361]: I0513 23:59:52.464772 3361 state_mem.go:36] "Initialized new in-memory state store" May 13 23:59:52.466349 kubelet[3361]: I0513 23:59:52.464908 3361 kubelet.go:446] "Attempting to sync node with API server" May 13 23:59:52.466349 kubelet[3361]: I0513 23:59:52.464920 3361 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:59:52.466349 kubelet[3361]: I0513 23:59:52.464938 3361 kubelet.go:352] "Adding apiserver pod source" May 13 23:59:52.466349 kubelet[3361]: I0513 23:59:52.464951 3361 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:59:52.467028 kubelet[3361]: I0513 23:59:52.467011 3361 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:59:52.467575 kubelet[3361]: I0513 23:59:52.467559 3361 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:59:52.468153 kubelet[3361]: I0513 23:59:52.468135 3361 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 23:59:52.468266 kubelet[3361]: I0513 23:59:52.468253 3361 server.go:1287] "Started kubelet" May 13 23:59:52.470917 kubelet[3361]: I0513 23:59:52.470900 3361 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:59:52.477444 kubelet[3361]: I0513 
23:59:52.477418 3361 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:59:52.478720 kubelet[3361]: I0513 23:59:52.478704 3361 server.go:490] "Adding debug handlers to kubelet server" May 13 23:59:52.479974 kubelet[3361]: I0513 23:59:52.479926 3361 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:59:52.480224 kubelet[3361]: I0513 23:59:52.480212 3361 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:59:52.480640 kubelet[3361]: I0513 23:59:52.480623 3361 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:59:52.482522 kubelet[3361]: I0513 23:59:52.482476 3361 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 23:59:52.482851 kubelet[3361]: E0513 23:59:52.482828 3361 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-1d9e750aa6\" not found" May 13 23:59:52.489836 kubelet[3361]: I0513 23:59:52.489818 3361 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:59:52.490036 kubelet[3361]: I0513 23:59:52.490026 3361 reconciler.go:26] "Reconciler: start to sync state" May 13 23:59:52.493418 kubelet[3361]: I0513 23:59:52.493392 3361 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 13 23:59:52.494452 kubelet[3361]: I0513 23:59:52.494425 3361 factory.go:221] Registration of the systemd container factory successfully May 13 23:59:52.494598 kubelet[3361]: I0513 23:59:52.494547 3361 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:59:52.499363 kubelet[3361]: I0513 23:59:52.499054 3361 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:59:52.499363 kubelet[3361]: I0513 23:59:52.499082 3361 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 23:59:52.499363 kubelet[3361]: I0513 23:59:52.499104 3361 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 23:59:52.499363 kubelet[3361]: I0513 23:59:52.499113 3361 kubelet.go:2388] "Starting kubelet main sync loop" May 13 23:59:52.499363 kubelet[3361]: E0513 23:59:52.499163 3361 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:59:52.511706 kubelet[3361]: E0513 23:59:52.511668 3361 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:59:52.513558 kubelet[3361]: I0513 23:59:52.513537 3361 factory.go:221] Registration of the containerd container factory successfully May 13 23:59:52.563112 kubelet[3361]: I0513 23:59:52.563084 3361 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 23:59:52.563266 kubelet[3361]: I0513 23:59:52.563245 3361 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 23:59:52.563266 kubelet[3361]: I0513 23:59:52.563271 3361 state_mem.go:36] "Initialized new in-memory state store" May 13 23:59:52.563473 kubelet[3361]: I0513 23:59:52.563455 3361 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 23:59:52.563567 kubelet[3361]: I0513 23:59:52.563471 3361 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 23:59:52.563567 kubelet[3361]: I0513 23:59:52.563494 3361 policy_none.go:49] "None policy: Start" May 13 23:59:52.563567 kubelet[3361]: I0513 23:59:52.563522 3361 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 23:59:52.563567 kubelet[3361]: I0513 23:59:52.563536 3361 state_mem.go:35] "Initializing new in-memory state store" May 13 23:59:52.563725 kubelet[3361]: I0513 23:59:52.563673 3361 state_mem.go:75] "Updated machine memory state" May 13 23:59:52.568402 kubelet[3361]: I0513 23:59:52.567310 3361 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:59:52.568402 kubelet[3361]: I0513 23:59:52.567483 3361 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:59:52.568402 kubelet[3361]: I0513 23:59:52.567496 3361 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:59:52.568402 kubelet[3361]: I0513 23:59:52.568322 3361 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:59:52.572270 kubelet[3361]: E0513 23:59:52.570167 3361 
eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 13 23:59:52.599954 kubelet[3361]: I0513 23:59:52.599899 3361 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:52.600967 kubelet[3361]: I0513 23:59:52.600201 3361 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:52.600967 kubelet[3361]: I0513 23:59:52.600449 3361 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:52.613087 kubelet[3361]: W0513 23:59:52.612553 3361 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:59:52.613087 kubelet[3361]: W0513 23:59:52.612936 3361 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:59:52.614861 kubelet[3361]: W0513 23:59:52.614834 3361 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:59:52.615025 kubelet[3361]: E0513 23:59:52.615010 3361 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284.0.0-n-1d9e750aa6\" already exists" pod="kube-system/kube-apiserver-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:52.670972 kubelet[3361]: I0513 23:59:52.670847 3361 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:52.683136 kubelet[3361]: I0513 23:59:52.683100 3361 kubelet_node_status.go:125] "Node was previously registered" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:52.683282 kubelet[3361]: I0513 23:59:52.683184 3361 
kubelet_node_status.go:79] "Successfully registered node" node="ci-4284.0.0-n-1d9e750aa6" May 13 23:59:52.683282 kubelet[3361]: I0513 23:59:52.683228 3361 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:59:52.683738 containerd[1734]: time="2025-05-13T23:59:52.683698240Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 23:59:52.684530 kubelet[3361]: I0513 23:59:52.683946 3361 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:59:52.691116 kubelet[3361]: I0513 23:59:52.691092 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4cb9b100f08d20b9081615b917964084-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-1d9e750aa6\" (UID: \"4cb9b100f08d20b9081615b917964084\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:52.691232 kubelet[3361]: I0513 23:59:52.691124 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4cb9b100f08d20b9081615b917964084-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-1d9e750aa6\" (UID: \"4cb9b100f08d20b9081615b917964084\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:52.691232 kubelet[3361]: I0513 23:59:52.691151 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/104282df4f4d1cf3e839cc7313e4127d-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-1d9e750aa6\" (UID: \"104282df4f4d1cf3e839cc7313e4127d\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:52.691232 kubelet[3361]: I0513 23:59:52.691173 3361 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/104282df4f4d1cf3e839cc7313e4127d-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-1d9e750aa6\" (UID: \"104282df4f4d1cf3e839cc7313e4127d\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:52.691232 kubelet[3361]: I0513 23:59:52.691195 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/104282df4f4d1cf3e839cc7313e4127d-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-1d9e750aa6\" (UID: \"104282df4f4d1cf3e839cc7313e4127d\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:52.691232 kubelet[3361]: I0513 23:59:52.691220 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/104282df4f4d1cf3e839cc7313e4127d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-1d9e750aa6\" (UID: \"104282df4f4d1cf3e839cc7313e4127d\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:52.691421 kubelet[3361]: I0513 23:59:52.691243 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4cb9b100f08d20b9081615b917964084-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-1d9e750aa6\" (UID: \"4cb9b100f08d20b9081615b917964084\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:52.691421 kubelet[3361]: I0513 23:59:52.691266 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/efabe998443ace0e4ccbccdbd6b694a3-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-1d9e750aa6\" (UID: \"efabe998443ace0e4ccbccdbd6b694a3\") " 
pod="kube-system/kube-scheduler-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:52.691421 kubelet[3361]: I0513 23:59:52.691290 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/104282df4f4d1cf3e839cc7313e4127d-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-1d9e750aa6\" (UID: \"104282df4f4d1cf3e839cc7313e4127d\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:53.468331 kubelet[3361]: I0513 23:59:53.468282 3361 apiserver.go:52] "Watching apiserver" May 13 23:59:54.796716 kubelet[3361]: I0513 23:59:53.496252 3361 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:59:54.796716 kubelet[3361]: I0513 23:59:53.496615 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284.0.0-n-1d9e750aa6" podStartSLOduration=1.496599758 podStartE2EDuration="1.496599758s" podCreationTimestamp="2025-05-13 23:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:59:53.496590258 +0000 UTC m=+1.096018139" watchObservedRunningTime="2025-05-13 23:59:53.496599758 +0000 UTC m=+1.096027639" May 13 23:59:54.796716 kubelet[3361]: I0513 23:59:53.507771 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-1d9e750aa6" podStartSLOduration=1.5077534240000001 podStartE2EDuration="1.507753424s" podCreationTimestamp="2025-05-13 23:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:59:53.507543721 +0000 UTC m=+1.106971602" watchObservedRunningTime="2025-05-13 23:59:53.507753424 +0000 UTC m=+1.107181405" May 13 23:59:54.796716 kubelet[3361]: I0513 23:59:53.546908 3361 
kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:54.796716 kubelet[3361]: I0513 23:59:53.547820 3361 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:54.796716 kubelet[3361]: W0513 23:59:53.559883 3361 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:59:54.796716 kubelet[3361]: E0513 23:59:53.559945 3361 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284.0.0-n-1d9e750aa6\" already exists" pod="kube-system/kube-apiserver-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:53.477327 systemd[1]: Created slice kubepods-besteffort-podb54dff3e_8006_4557_a4b4_237af6ef1a27.slice - libcontainer container kubepods-besteffort-podb54dff3e_8006_4557_a4b4_237af6ef1a27.slice. May 13 23:59:54.802880 kubelet[3361]: I0513 23:59:53.561172 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284.0.0-n-1d9e750aa6" podStartSLOduration=13.56115622 podStartE2EDuration="13.56115622s" podCreationTimestamp="2025-05-13 23:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:59:53.531115472 +0000 UTC m=+1.130543453" watchObservedRunningTime="2025-05-13 23:59:53.56115622 +0000 UTC m=+1.160584101" May 13 23:59:54.802880 kubelet[3361]: W0513 23:59:53.561340 3361 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:59:54.802880 kubelet[3361]: E0513 23:59:53.561412 3361 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284.0.0-n-1d9e750aa6\" already exists" 
pod="kube-system/kube-scheduler-ci-4284.0.0-n-1d9e750aa6" May 13 23:59:54.802880 kubelet[3361]: I0513 23:59:53.596369 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b54dff3e-8006-4557-a4b4-237af6ef1a27-xtables-lock\") pod \"kube-proxy-frhhf\" (UID: \"b54dff3e-8006-4557-a4b4-237af6ef1a27\") " pod="kube-system/kube-proxy-frhhf" May 13 23:59:54.802880 kubelet[3361]: I0513 23:59:53.596420 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn9td\" (UniqueName: \"kubernetes.io/projected/b54dff3e-8006-4557-a4b4-237af6ef1a27-kube-api-access-jn9td\") pod \"kube-proxy-frhhf\" (UID: \"b54dff3e-8006-4557-a4b4-237af6ef1a27\") " pod="kube-system/kube-proxy-frhhf" May 13 23:59:54.802880 kubelet[3361]: I0513 23:59:53.596444 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b54dff3e-8006-4557-a4b4-237af6ef1a27-kube-proxy\") pod \"kube-proxy-frhhf\" (UID: \"b54dff3e-8006-4557-a4b4-237af6ef1a27\") " pod="kube-system/kube-proxy-frhhf" May 13 23:59:54.803317 kubelet[3361]: I0513 23:59:53.596465 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b54dff3e-8006-4557-a4b4-237af6ef1a27-lib-modules\") pod \"kube-proxy-frhhf\" (UID: \"b54dff3e-8006-4557-a4b4-237af6ef1a27\") " pod="kube-system/kube-proxy-frhhf" May 13 23:59:54.819821 sudo[3396]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 23:59:54.820215 sudo[3396]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 13 23:59:55.099751 containerd[1734]: time="2025-05-13T23:59:55.099612071Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-frhhf,Uid:b54dff3e-8006-4557-a4b4-237af6ef1a27,Namespace:kube-system,Attempt:0,}" May 13 23:59:55.165818 containerd[1734]: time="2025-05-13T23:59:55.165705957Z" level=info msg="connecting to shim 9528802eb0cc053d33680bbbae90b375852032d00af569a13e8c9289c28fa3ca" address="unix:///run/containerd/s/9278a4f3a6dc5d174eebd7cd2f31e4608e5df0ea77301986b988b5db3712956c" namespace=k8s.io protocol=ttrpc version=3 May 13 23:59:55.203741 systemd[1]: Started cri-containerd-9528802eb0cc053d33680bbbae90b375852032d00af569a13e8c9289c28fa3ca.scope - libcontainer container 9528802eb0cc053d33680bbbae90b375852032d00af569a13e8c9289c28fa3ca. May 13 23:59:55.248901 containerd[1734]: time="2025-05-13T23:59:55.248853898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-frhhf,Uid:b54dff3e-8006-4557-a4b4-237af6ef1a27,Namespace:kube-system,Attempt:0,} returns sandbox id \"9528802eb0cc053d33680bbbae90b375852032d00af569a13e8c9289c28fa3ca\"" May 13 23:59:55.254818 containerd[1734]: time="2025-05-13T23:59:55.254678585Z" level=info msg="CreateContainer within sandbox \"9528802eb0cc053d33680bbbae90b375852032d00af569a13e8c9289c28fa3ca\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:59:55.280709 containerd[1734]: time="2025-05-13T23:59:55.280096364Z" level=info msg="Container 4557de479024325b9014c2a7ae3c1d31dcc8764cc54176b214a2628fead87325: CDI devices from CRI Config.CDIDevices: []" May 13 23:59:55.289286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4121632366.mount: Deactivated successfully. 
May 13 23:59:55.309343 containerd[1734]: time="2025-05-13T23:59:55.309256699Z" level=info msg="CreateContainer within sandbox \"9528802eb0cc053d33680bbbae90b375852032d00af569a13e8c9289c28fa3ca\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4557de479024325b9014c2a7ae3c1d31dcc8764cc54176b214a2628fead87325\"" May 13 23:59:55.313416 containerd[1734]: time="2025-05-13T23:59:55.311047026Z" level=info msg="StartContainer for \"4557de479024325b9014c2a7ae3c1d31dcc8764cc54176b214a2628fead87325\"" May 13 23:59:55.314211 containerd[1734]: time="2025-05-13T23:59:55.314175173Z" level=info msg="connecting to shim 4557de479024325b9014c2a7ae3c1d31dcc8764cc54176b214a2628fead87325" address="unix:///run/containerd/s/9278a4f3a6dc5d174eebd7cd2f31e4608e5df0ea77301986b988b5db3712956c" protocol=ttrpc version=3 May 13 23:59:55.341690 systemd[1]: Started cri-containerd-4557de479024325b9014c2a7ae3c1d31dcc8764cc54176b214a2628fead87325.scope - libcontainer container 4557de479024325b9014c2a7ae3c1d31dcc8764cc54176b214a2628fead87325. 
May 13 23:59:55.421619 containerd[1734]: time="2025-05-13T23:59:55.421040068Z" level=info msg="StartContainer for \"4557de479024325b9014c2a7ae3c1d31dcc8764cc54176b214a2628fead87325\" returns successfully" May 13 23:59:55.425241 sudo[3396]: pam_unix(sudo:session): session closed for user root May 13 23:59:55.570847 kubelet[3361]: I0513 23:59:55.570572 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-frhhf" podStartSLOduration=2.570548299 podStartE2EDuration="2.570548299s" podCreationTimestamp="2025-05-13 23:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:59:55.570292595 +0000 UTC m=+3.169720476" watchObservedRunningTime="2025-05-13 23:59:55.570548299 +0000 UTC m=+3.169976280" May 13 23:59:55.952578 kubelet[3361]: W0513 23:59:55.951345 3361 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4284.0.0-n-1d9e750aa6" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284.0.0-n-1d9e750aa6' and this object May 13 23:59:55.952578 kubelet[3361]: E0513 23:59:55.951396 3361 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4284.0.0-n-1d9e750aa6\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284.0.0-n-1d9e750aa6' and this object" logger="UnhandledError" May 13 23:59:55.952578 kubelet[3361]: W0513 23:59:55.951562 3361 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4284.0.0-n-1d9e750aa6" cannot list resource "configmaps" 
in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284.0.0-n-1d9e750aa6' and this object May 13 23:59:55.952578 kubelet[3361]: E0513 23:59:55.951588 3361 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4284.0.0-n-1d9e750aa6\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284.0.0-n-1d9e750aa6' and this object" logger="UnhandledError" May 13 23:59:55.952578 kubelet[3361]: I0513 23:59:55.951770 3361 status_manager.go:890] "Failed to get status for pod" podUID="ef58f594-6563-4ba4-8d64-e1d9d3132abe" pod="kube-system/cilium-gjg7d" err="pods \"cilium-gjg7d\" is forbidden: User \"system:node:ci-4284.0.0-n-1d9e750aa6\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284.0.0-n-1d9e750aa6' and this object" May 13 23:59:55.954405 systemd[1]: Created slice kubepods-burstable-podef58f594_6563_4ba4_8d64_e1d9d3132abe.slice - libcontainer container kubepods-burstable-podef58f594_6563_4ba4_8d64_e1d9d3132abe.slice. May 13 23:59:55.966229 systemd[1]: Created slice kubepods-besteffort-poddd77439c_55b7_4ed7_afe7_4df2fd32f8c0.slice - libcontainer container kubepods-besteffort-poddd77439c_55b7_4ed7_afe7_4df2fd32f8c0.slice. 
May 13 23:59:56.114288 kubelet[3361]: I0513 23:59:56.114232 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-bpf-maps\") pod \"cilium-gjg7d\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " pod="kube-system/cilium-gjg7d" May 13 23:59:56.114288 kubelet[3361]: I0513 23:59:56.114284 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef58f594-6563-4ba4-8d64-e1d9d3132abe-cilium-config-path\") pod \"cilium-gjg7d\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " pod="kube-system/cilium-gjg7d" May 13 23:59:56.114567 kubelet[3361]: I0513 23:59:56.114313 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwh5r\" (UniqueName: \"kubernetes.io/projected/ef58f594-6563-4ba4-8d64-e1d9d3132abe-kube-api-access-vwh5r\") pod \"cilium-gjg7d\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " pod="kube-system/cilium-gjg7d" May 13 23:59:56.114567 kubelet[3361]: I0513 23:59:56.114340 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-hostproc\") pod \"cilium-gjg7d\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " pod="kube-system/cilium-gjg7d" May 13 23:59:56.114567 kubelet[3361]: I0513 23:59:56.114365 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-cilium-cgroup\") pod \"cilium-gjg7d\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " pod="kube-system/cilium-gjg7d" May 13 23:59:56.114567 kubelet[3361]: I0513 23:59:56.114390 3361 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-xtables-lock\") pod \"cilium-gjg7d\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " pod="kube-system/cilium-gjg7d" May 13 23:59:56.114567 kubelet[3361]: I0513 23:59:56.114413 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-host-proc-sys-kernel\") pod \"cilium-gjg7d\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " pod="kube-system/cilium-gjg7d" May 13 23:59:56.114835 kubelet[3361]: I0513 23:59:56.114439 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4qlw\" (UniqueName: \"kubernetes.io/projected/dd77439c-55b7-4ed7-afe7-4df2fd32f8c0-kube-api-access-c4qlw\") pod \"cilium-operator-6c4d7847fc-8q2fr\" (UID: \"dd77439c-55b7-4ed7-afe7-4df2fd32f8c0\") " pod="kube-system/cilium-operator-6c4d7847fc-8q2fr" May 13 23:59:56.114835 kubelet[3361]: I0513 23:59:56.114468 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-lib-modules\") pod \"cilium-gjg7d\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " pod="kube-system/cilium-gjg7d" May 13 23:59:56.114835 kubelet[3361]: I0513 23:59:56.114498 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd77439c-55b7-4ed7-afe7-4df2fd32f8c0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-8q2fr\" (UID: \"dd77439c-55b7-4ed7-afe7-4df2fd32f8c0\") " pod="kube-system/cilium-operator-6c4d7847fc-8q2fr" May 13 23:59:56.114835 kubelet[3361]: I0513 23:59:56.114558 3361 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef58f594-6563-4ba4-8d64-e1d9d3132abe-clustermesh-secrets\") pod \"cilium-gjg7d\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " pod="kube-system/cilium-gjg7d" May 13 23:59:56.114835 kubelet[3361]: I0513 23:59:56.114584 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef58f594-6563-4ba4-8d64-e1d9d3132abe-hubble-tls\") pod \"cilium-gjg7d\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " pod="kube-system/cilium-gjg7d" May 13 23:59:56.115040 kubelet[3361]: I0513 23:59:56.114610 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-cni-path\") pod \"cilium-gjg7d\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " pod="kube-system/cilium-gjg7d" May 13 23:59:56.115040 kubelet[3361]: I0513 23:59:56.114653 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-cilium-run\") pod \"cilium-gjg7d\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " pod="kube-system/cilium-gjg7d" May 13 23:59:56.115040 kubelet[3361]: I0513 23:59:56.114680 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-etc-cni-netd\") pod \"cilium-gjg7d\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " pod="kube-system/cilium-gjg7d" May 13 23:59:56.115040 kubelet[3361]: I0513 23:59:56.114718 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-host-proc-sys-net\") pod \"cilium-gjg7d\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " pod="kube-system/cilium-gjg7d" May 13 23:59:57.461338 containerd[1734]: time="2025-05-13T23:59:57.461291815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gjg7d,Uid:ef58f594-6563-4ba4-8d64-e1d9d3132abe,Namespace:kube-system,Attempt:0,}" May 13 23:59:57.475587 containerd[1734]: time="2025-05-13T23:59:57.475392625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8q2fr,Uid:dd77439c-55b7-4ed7-afe7-4df2fd32f8c0,Namespace:kube-system,Attempt:0,}" May 13 23:59:57.533702 containerd[1734]: time="2025-05-13T23:59:57.533545693Z" level=info msg="connecting to shim eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a" address="unix:///run/containerd/s/88a127e9e23fbf3006e9c5c9673e441457dee6ef232f8f13c1dc123483231865" namespace=k8s.io protocol=ttrpc version=3 May 13 23:59:57.565374 containerd[1734]: time="2025-05-13T23:59:57.565221266Z" level=info msg="connecting to shim bbfa528554833433d8e807e05461b4dfe39e649c6b8853894f768b7cd9c9d736" address="unix:///run/containerd/s/abcc1b712423745545463b33149223a32e10c36c7eeeb123888b36949da46698" namespace=k8s.io protocol=ttrpc version=3 May 13 23:59:57.572732 systemd[1]: Started cri-containerd-eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a.scope - libcontainer container eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a. May 13 23:59:57.604715 systemd[1]: Started cri-containerd-bbfa528554833433d8e807e05461b4dfe39e649c6b8853894f768b7cd9c9d736.scope - libcontainer container bbfa528554833433d8e807e05461b4dfe39e649c6b8853894f768b7cd9c9d736. 
May 13 23:59:57.625000 containerd[1734]: time="2025-05-13T23:59:57.624904557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gjg7d,Uid:ef58f594-6563-4ba4-8d64-e1d9d3132abe,Namespace:kube-system,Attempt:0,} returns sandbox id \"eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a\"" May 13 23:59:57.629121 containerd[1734]: time="2025-05-13T23:59:57.629031418Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 23:59:57.666357 containerd[1734]: time="2025-05-13T23:59:57.666311674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8q2fr,Uid:dd77439c-55b7-4ed7-afe7-4df2fd32f8c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbfa528554833433d8e807e05461b4dfe39e649c6b8853894f768b7cd9c9d736\"" May 14 00:00:00.893356 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. May 14 00:00:00.979031 systemd[1]: logrotate.service: Deactivated successfully. May 14 00:00:03.199722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1162929898.mount: Deactivated successfully. 
May 14 00:00:05.430207 containerd[1734]: time="2025-05-14T00:00:05.430148248Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:05.432511 containerd[1734]: time="2025-05-14T00:00:05.432421380Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 14 00:00:05.436210 containerd[1734]: time="2025-05-14T00:00:05.436152834Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:05.437683 containerd[1734]: time="2025-05-14T00:00:05.437490353Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.808402034s" May 14 00:00:05.437683 containerd[1734]: time="2025-05-14T00:00:05.437547554Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 14 00:00:05.439192 containerd[1734]: time="2025-05-14T00:00:05.438903374Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 00:00:05.440340 containerd[1734]: time="2025-05-14T00:00:05.440302694Z" level=info msg="CreateContainer within sandbox \"eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:00:05.475267 containerd[1734]: time="2025-05-14T00:00:05.475214697Z" level=info msg="Container e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:05.494029 containerd[1734]: time="2025-05-14T00:00:05.493987468Z" level=info msg="CreateContainer within sandbox \"eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75\"" May 14 00:00:05.494706 containerd[1734]: time="2025-05-14T00:00:05.494670678Z" level=info msg="StartContainer for \"e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75\"" May 14 00:00:05.495766 containerd[1734]: time="2025-05-14T00:00:05.495722793Z" level=info msg="connecting to shim e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75" address="unix:///run/containerd/s/88a127e9e23fbf3006e9c5c9673e441457dee6ef232f8f13c1dc123483231865" protocol=ttrpc version=3 May 14 00:00:05.516235 systemd[1]: Started cri-containerd-e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75.scope - libcontainer container e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75. May 14 00:00:05.549475 containerd[1734]: time="2025-05-14T00:00:05.549336066Z" level=info msg="StartContainer for \"e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75\" returns successfully" May 14 00:00:05.557805 systemd[1]: cri-containerd-e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75.scope: Deactivated successfully. 
May 14 00:00:05.560079 containerd[1734]: time="2025-05-14T00:00:05.559876218Z" level=info msg="received exit event container_id:\"e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75\" id:\"e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75\" pid:3749 exited_at:{seconds:1747180805 nanos:558829703}" May 14 00:00:05.560079 containerd[1734]: time="2025-05-14T00:00:05.560071721Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75\" id:\"e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75\" pid:3749 exited_at:{seconds:1747180805 nanos:558829703}" May 14 00:00:05.588328 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75-rootfs.mount: Deactivated successfully. May 14 00:00:09.590051 containerd[1734]: time="2025-05-14T00:00:09.589973637Z" level=info msg="CreateContainer within sandbox \"eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 00:00:09.619254 containerd[1734]: time="2025-05-14T00:00:09.619209558Z" level=info msg="Container e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:09.626228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3706735542.mount: Deactivated successfully. 
May 14 00:00:09.642987 containerd[1734]: time="2025-05-14T00:00:09.642946101Z" level=info msg="CreateContainer within sandbox \"eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055\"" May 14 00:00:09.643646 containerd[1734]: time="2025-05-14T00:00:09.643612510Z" level=info msg="StartContainer for \"e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055\"" May 14 00:00:09.644579 containerd[1734]: time="2025-05-14T00:00:09.644545424Z" level=info msg="connecting to shim e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055" address="unix:///run/containerd/s/88a127e9e23fbf3006e9c5c9673e441457dee6ef232f8f13c1dc123483231865" protocol=ttrpc version=3 May 14 00:00:09.669675 systemd[1]: Started cri-containerd-e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055.scope - libcontainer container e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055. May 14 00:00:09.709960 containerd[1734]: time="2025-05-14T00:00:09.709913466Z" level=info msg="StartContainer for \"e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055\" returns successfully" May 14 00:00:09.721071 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:00:09.721392 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 00:00:09.721987 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 14 00:00:09.726174 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
May 14 00:00:09.728959 containerd[1734]: time="2025-05-14T00:00:09.728080128Z" level=info msg="received exit event container_id:\"e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055\" id:\"e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055\" pid:3800 exited_at:{seconds:1747180809 nanos:727847325}" May 14 00:00:09.728959 containerd[1734]: time="2025-05-14T00:00:09.728434034Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055\" id:\"e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055\" pid:3800 exited_at:{seconds:1747180809 nanos:727847325}" May 14 00:00:09.729616 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 00:00:09.730498 systemd[1]: cri-containerd-e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055.scope: Deactivated successfully. May 14 00:00:09.758545 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 00:00:10.596381 containerd[1734]: time="2025-05-14T00:00:10.594483706Z" level=info msg="CreateContainer within sandbox \"eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 00:00:10.619655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055-rootfs.mount: Deactivated successfully. 
May 14 00:00:10.636657 containerd[1734]: time="2025-05-14T00:00:10.636613633Z" level=info msg="Container 19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:10.662003 containerd[1734]: time="2025-05-14T00:00:10.661949110Z" level=info msg="CreateContainer within sandbox \"eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d\"" May 14 00:00:10.662586 containerd[1734]: time="2025-05-14T00:00:10.662543319Z" level=info msg="StartContainer for \"19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d\"" May 14 00:00:10.665792 containerd[1734]: time="2025-05-14T00:00:10.665759066Z" level=info msg="connecting to shim 19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d" address="unix:///run/containerd/s/88a127e9e23fbf3006e9c5c9673e441457dee6ef232f8f13c1dc123483231865" protocol=ttrpc version=3 May 14 00:00:10.730818 systemd[1]: Started cri-containerd-19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d.scope - libcontainer container 19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d. May 14 00:00:10.772963 systemd[1]: cri-containerd-19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d.scope: Deactivated successfully. 
May 14 00:00:10.776899 containerd[1734]: time="2025-05-14T00:00:10.776864519Z" level=info msg="received exit event container_id:\"19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d\" id:\"19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d\" pid:3850 exited_at:{seconds:1747180810 nanos:776664216}" May 14 00:00:10.777251 containerd[1734]: time="2025-05-14T00:00:10.777141823Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d\" id:\"19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d\" pid:3850 exited_at:{seconds:1747180810 nanos:776664216}" May 14 00:00:10.779995 containerd[1734]: time="2025-05-14T00:00:10.779972265Z" level=info msg="StartContainer for \"19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d\" returns successfully" May 14 00:00:11.599562 containerd[1734]: time="2025-05-14T00:00:11.599319051Z" level=info msg="CreateContainer within sandbox \"eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 00:00:11.617190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount904637244.mount: Deactivated successfully. May 14 00:00:11.617322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d-rootfs.mount: Deactivated successfully. May 14 00:00:11.630534 containerd[1734]: time="2025-05-14T00:00:11.628194481Z" level=info msg="Container daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:11.634401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2680390139.mount: Deactivated successfully. 
May 14 00:00:11.654364 containerd[1734]: time="2025-05-14T00:00:11.654314969Z" level=info msg="CreateContainer within sandbox \"eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b\"" May 14 00:00:11.654926 containerd[1734]: time="2025-05-14T00:00:11.654892878Z" level=info msg="StartContainer for \"daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b\"" May 14 00:00:11.656763 containerd[1734]: time="2025-05-14T00:00:11.656724805Z" level=info msg="connecting to shim daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b" address="unix:///run/containerd/s/88a127e9e23fbf3006e9c5c9673e441457dee6ef232f8f13c1dc123483231865" protocol=ttrpc version=3 May 14 00:00:11.683730 systemd[1]: Started cri-containerd-daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b.scope - libcontainer container daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b. May 14 00:00:11.714554 systemd[1]: cri-containerd-daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b.scope: Deactivated successfully. 
May 14 00:00:11.717137 containerd[1734]: time="2025-05-14T00:00:11.717067403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b\" id:\"daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b\" pid:3892 exited_at:{seconds:1747180811 nanos:716155089}" May 14 00:00:11.719530 containerd[1734]: time="2025-05-14T00:00:11.719021132Z" level=info msg="received exit event container_id:\"daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b\" id:\"daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b\" pid:3892 exited_at:{seconds:1747180811 nanos:716155089}" May 14 00:00:11.734297 containerd[1734]: time="2025-05-14T00:00:11.734261058Z" level=info msg="StartContainer for \"daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b\" returns successfully" May 14 00:00:11.749384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b-rootfs.mount: Deactivated successfully. May 14 00:00:12.605608 containerd[1734]: time="2025-05-14T00:00:12.605544617Z" level=info msg="CreateContainer within sandbox \"eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 00:00:12.644342 containerd[1734]: time="2025-05-14T00:00:12.641320349Z" level=info msg="Container 4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:12.647950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4161196236.mount: Deactivated successfully. 
May 14 00:00:12.702641 containerd[1734]: time="2025-05-14T00:00:12.702588860Z" level=info msg="CreateContainer within sandbox \"eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\"" May 14 00:00:12.703407 containerd[1734]: time="2025-05-14T00:00:12.703339671Z" level=info msg="StartContainer for \"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\"" May 14 00:00:12.705353 containerd[1734]: time="2025-05-14T00:00:12.705320501Z" level=info msg="connecting to shim 4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910" address="unix:///run/containerd/s/88a127e9e23fbf3006e9c5c9673e441457dee6ef232f8f13c1dc123483231865" protocol=ttrpc version=3 May 14 00:00:12.731711 systemd[1]: Started cri-containerd-4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910.scope - libcontainer container 4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910. May 14 00:00:12.801522 containerd[1734]: time="2025-05-14T00:00:12.801454531Z" level=info msg="StartContainer for \"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" returns successfully" May 14 00:00:12.940834 containerd[1734]: time="2025-05-14T00:00:12.940689802Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" id:\"c3701f9f833dfd3abf51fa472f5e177f5d1f32da83741aaf221ee3fa8e1a79da\" pid:3962 exited_at:{seconds:1747180812 nanos:938983276}" May 14 00:00:13.046888 kubelet[3361]: I0514 00:00:13.046680 3361 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 14 00:00:13.128100 systemd[1]: Created slice kubepods-burstable-pod769ff77c_7592_4196_9839_106c4c04cf4e.slice - libcontainer container kubepods-burstable-pod769ff77c_7592_4196_9839_106c4c04cf4e.slice. 
May 14 00:00:13.142156 systemd[1]: Created slice kubepods-burstable-poddf02a164_745a_4fd1_932a_c5b1d96abde1.slice - libcontainer container kubepods-burstable-poddf02a164_745a_4fd1_932a_c5b1d96abde1.slice. May 14 00:00:13.234761 kubelet[3361]: I0514 00:00:13.234097 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df02a164-745a-4fd1-932a-c5b1d96abde1-config-volume\") pod \"coredns-668d6bf9bc-rbpb5\" (UID: \"df02a164-745a-4fd1-932a-c5b1d96abde1\") " pod="kube-system/coredns-668d6bf9bc-rbpb5" May 14 00:00:13.234761 kubelet[3361]: I0514 00:00:13.234169 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8mcn\" (UniqueName: \"kubernetes.io/projected/df02a164-745a-4fd1-932a-c5b1d96abde1-kube-api-access-l8mcn\") pod \"coredns-668d6bf9bc-rbpb5\" (UID: \"df02a164-745a-4fd1-932a-c5b1d96abde1\") " pod="kube-system/coredns-668d6bf9bc-rbpb5" May 14 00:00:13.234761 kubelet[3361]: I0514 00:00:13.234208 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/769ff77c-7592-4196-9839-106c4c04cf4e-config-volume\") pod \"coredns-668d6bf9bc-7bmc8\" (UID: \"769ff77c-7592-4196-9839-106c4c04cf4e\") " pod="kube-system/coredns-668d6bf9bc-7bmc8" May 14 00:00:13.234761 kubelet[3361]: I0514 00:00:13.234237 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdm9d\" (UniqueName: \"kubernetes.io/projected/769ff77c-7592-4196-9839-106c4c04cf4e-kube-api-access-pdm9d\") pod \"coredns-668d6bf9bc-7bmc8\" (UID: \"769ff77c-7592-4196-9839-106c4c04cf4e\") " pod="kube-system/coredns-668d6bf9bc-7bmc8" May 14 00:00:13.436786 containerd[1734]: time="2025-05-14T00:00:13.436693279Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-7bmc8,Uid:769ff77c-7592-4196-9839-106c4c04cf4e,Namespace:kube-system,Attempt:0,}" May 14 00:00:13.469352 containerd[1734]: time="2025-05-14T00:00:13.469215662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rbpb5,Uid:df02a164-745a-4fd1-932a-c5b1d96abde1,Namespace:kube-system,Attempt:0,}" May 14 00:00:13.653906 kubelet[3361]: I0514 00:00:13.653760 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gjg7d" podStartSLOduration=10.841687518 podStartE2EDuration="18.653634005s" podCreationTimestamp="2025-05-13 23:59:55 +0000 UTC" firstStartedPulling="2025-05-13 23:59:57.626769384 +0000 UTC m=+5.226197265" lastFinishedPulling="2025-05-14 00:00:05.438715871 +0000 UTC m=+13.038143752" observedRunningTime="2025-05-14 00:00:13.652081982 +0000 UTC m=+21.251509963" watchObservedRunningTime="2025-05-14 00:00:13.653634005 +0000 UTC m=+21.253062086" May 14 00:00:13.811288 containerd[1734]: time="2025-05-14T00:00:13.811233349Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:13.813301 containerd[1734]: time="2025-05-14T00:00:13.813226879Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 14 00:00:13.818046 containerd[1734]: time="2025-05-14T00:00:13.817969449Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:13.819384 containerd[1734]: time="2025-05-14T00:00:13.819220268Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with 
image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 8.380278494s" May 14 00:00:13.819384 containerd[1734]: time="2025-05-14T00:00:13.819272269Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 14 00:00:13.821977 containerd[1734]: time="2025-05-14T00:00:13.821936708Z" level=info msg="CreateContainer within sandbox \"bbfa528554833433d8e807e05461b4dfe39e649c6b8853894f768b7cd9c9d736\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 00:00:13.841029 containerd[1734]: time="2025-05-14T00:00:13.839898176Z" level=info msg="Container d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:13.854646 containerd[1734]: time="2025-05-14T00:00:13.854607894Z" level=info msg="CreateContainer within sandbox \"bbfa528554833433d8e807e05461b4dfe39e649c6b8853894f768b7cd9c9d736\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9\"" May 14 00:00:13.856605 containerd[1734]: time="2025-05-14T00:00:13.855107102Z" level=info msg="StartContainer for \"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9\"" May 14 00:00:13.856605 containerd[1734]: time="2025-05-14T00:00:13.856485222Z" level=info msg="connecting to shim d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9" address="unix:///run/containerd/s/abcc1b712423745545463b33149223a32e10c36c7eeeb123888b36949da46698" protocol=ttrpc version=3 May 14 00:00:13.881680 systemd[1]: Started 
cri-containerd-d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9.scope - libcontainer container d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9. May 14 00:00:13.913836 containerd[1734]: time="2025-05-14T00:00:13.913646372Z" level=info msg="StartContainer for \"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9\" returns successfully" May 14 00:00:14.334603 containerd[1734]: time="2025-05-14T00:00:14.334553033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" id:\"218aa5a5799d12c8376632cbd23af3c4b6c6934cb7e384956ede66e6b96b424b\" pid:4105 exit_status:1 exited_at:{seconds:1747180814 nanos:333948524}" May 14 00:00:16.424571 containerd[1734]: time="2025-05-14T00:00:16.424526717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" id:\"c9772d0233c1219ea5d02c0365439f6ec17d24e3be635039e6152279780ab227\" pid:4129 exit_status:1 exited_at:{seconds:1747180816 nanos:423813406}" May 14 00:00:17.383734 systemd-networkd[1338]: cilium_host: Link UP May 14 00:00:17.383970 systemd-networkd[1338]: cilium_net: Link UP May 14 00:00:17.384223 systemd-networkd[1338]: cilium_net: Gained carrier May 14 00:00:17.384446 systemd-networkd[1338]: cilium_host: Gained carrier May 14 00:00:17.686668 systemd-networkd[1338]: cilium_vxlan: Link UP May 14 00:00:17.686681 systemd-networkd[1338]: cilium_vxlan: Gained carrier May 14 00:00:17.945546 kernel: NET: Registered PF_ALG protocol family May 14 00:00:17.993630 systemd-networkd[1338]: cilium_net: Gained IPv6LL May 14 00:00:18.185672 systemd-networkd[1338]: cilium_host: Gained IPv6LL May 14 00:00:18.553714 containerd[1734]: time="2025-05-14T00:00:18.553605275Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" 
id:\"b93361248159f9aed86bd2e2cdd1dbbe1b5489cff57a156a7282b2111b9ce3a8\" pid:4410 exit_status:1 exited_at:{seconds:1747180818 nanos:552534559}" May 14 00:00:18.726618 systemd-networkd[1338]: lxc_health: Link UP May 14 00:00:18.735771 systemd-networkd[1338]: lxc_health: Gained carrier May 14 00:00:18.989239 systemd-networkd[1338]: lxc87e92ee651fa: Link UP May 14 00:00:18.999613 kernel: eth0: renamed from tmp08709 May 14 00:00:19.007218 systemd-networkd[1338]: lxc87e92ee651fa: Gained carrier May 14 00:00:19.039592 kernel: eth0: renamed from tmp3200d May 14 00:00:19.046381 systemd-networkd[1338]: lxcadd55ba38608: Link UP May 14 00:00:19.047382 systemd-networkd[1338]: lxcadd55ba38608: Gained carrier May 14 00:00:19.082636 systemd-networkd[1338]: cilium_vxlan: Gained IPv6LL May 14 00:00:19.497941 kubelet[3361]: I0514 00:00:19.497871 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-8q2fr" podStartSLOduration=8.345502519 podStartE2EDuration="24.497837204s" podCreationTimestamp="2025-05-13 23:59:55 +0000 UTC" firstStartedPulling="2025-05-13 23:59:57.667846997 +0000 UTC m=+5.267274878" lastFinishedPulling="2025-05-14 00:00:13.820181582 +0000 UTC m=+21.419609563" observedRunningTime="2025-05-14 00:00:14.685592054 +0000 UTC m=+22.285019935" watchObservedRunningTime="2025-05-14 00:00:19.497837204 +0000 UTC m=+27.097265185" May 14 00:00:20.361799 systemd-networkd[1338]: lxcadd55ba38608: Gained IPv6LL May 14 00:00:20.745977 systemd-networkd[1338]: lxc_health: Gained IPv6LL May 14 00:00:20.750575 containerd[1734]: time="2025-05-14T00:00:20.750499016Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" id:\"073a87e897b0ede7f88bb4c3274f67a30626e602ed1bdb235da33c6826fdde83\" pid:4549 exited_at:{seconds:1747180820 nanos:749854806}" May 14 00:00:21.001724 systemd-networkd[1338]: lxc87e92ee651fa: Gained IPv6LL May 14 00:00:22.932354 
containerd[1734]: time="2025-05-14T00:00:22.932305633Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" id:\"49ee0a93c15add1d6b960fd66bdb81c65de90c0983be32e0de5bdef0cee3d781\" pid:4576 exited_at:{seconds:1747180822 nanos:931696623}" May 14 00:00:23.094535 containerd[1734]: time="2025-05-14T00:00:23.093582529Z" level=info msg="connecting to shim 08709ecd9f3a73fcf325deb31a83f2fbe29237e47a49c30f64c0ddbb2a7afa95" address="unix:///run/containerd/s/f1c8a79ce53a340e0834ba78119370b3d45e0e75d2ea598c409c7dd6e3fb0d5e" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:23.149708 systemd[1]: Started cri-containerd-08709ecd9f3a73fcf325deb31a83f2fbe29237e47a49c30f64c0ddbb2a7afa95.scope - libcontainer container 08709ecd9f3a73fcf325deb31a83f2fbe29237e47a49c30f64c0ddbb2a7afa95. May 14 00:00:23.155557 containerd[1734]: time="2025-05-14T00:00:23.155231945Z" level=info msg="connecting to shim 3200df716a8a87889c78ed8b89ec7073bbd185a72729fcdad23d715d32a984f9" address="unix:///run/containerd/s/95679ce204f0bc6cbcbff08a463b6142fc78fbcb4fd7c83da9a28b967574adf4" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:23.193822 systemd[1]: Started cri-containerd-3200df716a8a87889c78ed8b89ec7073bbd185a72729fcdad23d715d32a984f9.scope - libcontainer container 3200df716a8a87889c78ed8b89ec7073bbd185a72729fcdad23d715d32a984f9. 
May 14 00:00:23.248569 containerd[1734]: time="2025-05-14T00:00:23.248414329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7bmc8,Uid:769ff77c-7592-4196-9839-106c4c04cf4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"08709ecd9f3a73fcf325deb31a83f2fbe29237e47a49c30f64c0ddbb2a7afa95\"" May 14 00:00:23.252820 containerd[1734]: time="2025-05-14T00:00:23.252673992Z" level=info msg="CreateContainer within sandbox \"08709ecd9f3a73fcf325deb31a83f2fbe29237e47a49c30f64c0ddbb2a7afa95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:00:23.304298 containerd[1734]: time="2025-05-14T00:00:23.304251659Z" level=info msg="Container 16e483d99f8302a0f7b97748cd6d2b728af28fc3b78fc043a183b608a9151407: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:23.323439 containerd[1734]: time="2025-05-14T00:00:23.323399943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rbpb5,Uid:df02a164-745a-4fd1-932a-c5b1d96abde1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3200df716a8a87889c78ed8b89ec7073bbd185a72729fcdad23d715d32a984f9\"" May 14 00:00:23.324006 containerd[1734]: time="2025-05-14T00:00:23.323974652Z" level=info msg="CreateContainer within sandbox \"08709ecd9f3a73fcf325deb31a83f2fbe29237e47a49c30f64c0ddbb2a7afa95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"16e483d99f8302a0f7b97748cd6d2b728af28fc3b78fc043a183b608a9151407\"" May 14 00:00:23.325008 containerd[1734]: time="2025-05-14T00:00:23.324983967Z" level=info msg="StartContainer for \"16e483d99f8302a0f7b97748cd6d2b728af28fc3b78fc043a183b608a9151407\"" May 14 00:00:23.326689 containerd[1734]: time="2025-05-14T00:00:23.326668792Z" level=info msg="connecting to shim 16e483d99f8302a0f7b97748cd6d2b728af28fc3b78fc043a183b608a9151407" address="unix:///run/containerd/s/f1c8a79ce53a340e0834ba78119370b3d45e0e75d2ea598c409c7dd6e3fb0d5e" protocol=ttrpc version=3 May 14 00:00:23.328432 containerd[1734]: 
time="2025-05-14T00:00:23.328404618Z" level=info msg="CreateContainer within sandbox \"3200df716a8a87889c78ed8b89ec7073bbd185a72729fcdad23d715d32a984f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:00:23.350663 systemd[1]: Started cri-containerd-16e483d99f8302a0f7b97748cd6d2b728af28fc3b78fc043a183b608a9151407.scope - libcontainer container 16e483d99f8302a0f7b97748cd6d2b728af28fc3b78fc043a183b608a9151407. May 14 00:00:23.359205 containerd[1734]: time="2025-05-14T00:00:23.359156374Z" level=info msg="Container 45fda0adf9538525d446f042f932725b2bf55b79fbf06750031017422d2eef3b: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:23.388595 containerd[1734]: time="2025-05-14T00:00:23.387452895Z" level=info msg="CreateContainer within sandbox \"3200df716a8a87889c78ed8b89ec7073bbd185a72729fcdad23d715d32a984f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45fda0adf9538525d446f042f932725b2bf55b79fbf06750031017422d2eef3b\"" May 14 00:00:23.390056 containerd[1734]: time="2025-05-14T00:00:23.390013933Z" level=info msg="StartContainer for \"16e483d99f8302a0f7b97748cd6d2b728af28fc3b78fc043a183b608a9151407\" returns successfully" May 14 00:00:23.390653 containerd[1734]: time="2025-05-14T00:00:23.390478340Z" level=info msg="StartContainer for \"45fda0adf9538525d446f042f932725b2bf55b79fbf06750031017422d2eef3b\"" May 14 00:00:23.394917 containerd[1734]: time="2025-05-14T00:00:23.394778304Z" level=info msg="connecting to shim 45fda0adf9538525d446f042f932725b2bf55b79fbf06750031017422d2eef3b" address="unix:///run/containerd/s/95679ce204f0bc6cbcbff08a463b6142fc78fbcb4fd7c83da9a28b967574adf4" protocol=ttrpc version=3 May 14 00:00:23.424818 systemd[1]: Started cri-containerd-45fda0adf9538525d446f042f932725b2bf55b79fbf06750031017422d2eef3b.scope - libcontainer container 45fda0adf9538525d446f042f932725b2bf55b79fbf06750031017422d2eef3b. 
May 14 00:00:23.471523 containerd[1734]: time="2025-05-14T00:00:23.471462543Z" level=info msg="StartContainer for \"45fda0adf9538525d446f042f932725b2bf55b79fbf06750031017422d2eef3b\" returns successfully" May 14 00:00:23.660087 kubelet[3361]: I0514 00:00:23.658906 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rbpb5" podStartSLOduration=30.658883828 podStartE2EDuration="30.658883828s" podCreationTimestamp="2025-05-13 23:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:23.657952814 +0000 UTC m=+31.257380795" watchObservedRunningTime="2025-05-14 00:00:23.658883828 +0000 UTC m=+31.258311709" May 14 00:00:23.707352 kubelet[3361]: I0514 00:00:23.707268 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7bmc8" podStartSLOduration=30.707246546 podStartE2EDuration="30.707246546s" podCreationTimestamp="2025-05-13 23:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:23.677062998 +0000 UTC m=+31.276490879" watchObservedRunningTime="2025-05-14 00:00:23.707246546 +0000 UTC m=+31.306674527" May 14 00:00:25.031361 containerd[1734]: time="2025-05-14T00:00:25.031309319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" id:\"e3c93b68edcc5a77c2f47cdac8b443fc76798f4e08c2d8c95201d2bf1456fdaa\" pid:4766 exited_at:{seconds:1747180825 nanos:30849012}" May 14 00:00:25.184933 containerd[1734]: time="2025-05-14T00:00:25.184803699Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" id:\"37f9ee83ecf5d949fca4702b1ff6034e7b4b779657453a7f5f2af233751a225c\" pid:4797 exited_at:{seconds:1747180825 
nanos:184033488}" May 14 00:00:25.593355 sudo[2417]: pam_unix(sudo:session): session closed for user root May 14 00:00:25.693816 sshd[2416]: Connection closed by 10.200.16.10 port 40048 May 14 00:00:25.694283 sshd-session[2320]: pam_unix(sshd:session): session closed for user core May 14 00:00:25.699765 systemd[1]: sshd@6-10.200.8.4:22-10.200.16.10:40048.service: Deactivated successfully. May 14 00:00:25.702613 systemd[1]: session-9.scope: Deactivated successfully. May 14 00:00:25.702911 systemd[1]: session-9.scope: Consumed 4.558s CPU time, 265.2M memory peak. May 14 00:00:25.704528 systemd-logind[1718]: Session 9 logged out. Waiting for processes to exit. May 14 00:00:25.705904 systemd-logind[1718]: Removed session 9. May 14 00:02:08.164358 systemd[1]: Started sshd@7-10.200.8.4:22-10.200.16.10:54256.service - OpenSSH per-connection server daemon (10.200.16.10:54256). May 14 00:02:08.795578 sshd[4840]: Accepted publickey for core from 10.200.16.10 port 54256 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:02:08.797079 sshd-session[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:08.801655 systemd-logind[1718]: New session 10 of user core. May 14 00:02:08.805679 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 00:02:09.307522 sshd[4842]: Connection closed by 10.200.16.10 port 54256 May 14 00:02:09.308466 sshd-session[4840]: pam_unix(sshd:session): session closed for user core May 14 00:02:09.311560 systemd[1]: sshd@7-10.200.8.4:22-10.200.16.10:54256.service: Deactivated successfully. May 14 00:02:09.313687 systemd[1]: session-10.scope: Deactivated successfully. May 14 00:02:09.315177 systemd-logind[1718]: Session 10 logged out. Waiting for processes to exit. May 14 00:02:09.316308 systemd-logind[1718]: Removed session 10. May 14 00:02:14.421018 systemd[1]: Started sshd@8-10.200.8.4:22-10.200.16.10:41068.service - OpenSSH per-connection server daemon (10.200.16.10:41068). 
May 14 00:02:15.051784 sshd[4855]: Accepted publickey for core from 10.200.16.10 port 41068 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:02:15.053418 sshd-session[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:15.059371 systemd-logind[1718]: New session 11 of user core. May 14 00:02:15.062665 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 00:02:15.557460 sshd[4857]: Connection closed by 10.200.16.10 port 41068 May 14 00:02:15.558264 sshd-session[4855]: pam_unix(sshd:session): session closed for user core May 14 00:02:15.561225 systemd[1]: sshd@8-10.200.8.4:22-10.200.16.10:41068.service: Deactivated successfully. May 14 00:02:15.563623 systemd[1]: session-11.scope: Deactivated successfully. May 14 00:02:15.565585 systemd-logind[1718]: Session 11 logged out. Waiting for processes to exit. May 14 00:02:15.566864 systemd-logind[1718]: Removed session 11. May 14 00:02:20.669608 systemd[1]: Started sshd@9-10.200.8.4:22-10.200.16.10:37210.service - OpenSSH per-connection server daemon (10.200.16.10:37210). May 14 00:02:21.298788 sshd[4870]: Accepted publickey for core from 10.200.16.10 port 37210 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:02:21.300700 sshd-session[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:21.307311 systemd-logind[1718]: New session 12 of user core. May 14 00:02:21.315661 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 00:02:21.802383 sshd[4872]: Connection closed by 10.200.16.10 port 37210 May 14 00:02:21.803152 sshd-session[4870]: pam_unix(sshd:session): session closed for user core May 14 00:02:21.806173 systemd[1]: sshd@9-10.200.8.4:22-10.200.16.10:37210.service: Deactivated successfully. May 14 00:02:21.808353 systemd[1]: session-12.scope: Deactivated successfully. May 14 00:02:21.810218 systemd-logind[1718]: Session 12 logged out. 
Waiting for processes to exit. May 14 00:02:21.811149 systemd-logind[1718]: Removed session 12. May 14 00:02:26.916640 systemd[1]: Started sshd@10-10.200.8.4:22-10.200.16.10:37218.service - OpenSSH per-connection server daemon (10.200.16.10:37218). May 14 00:02:27.554176 sshd[4888]: Accepted publickey for core from 10.200.16.10 port 37218 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:02:27.555628 sshd-session[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:27.559956 systemd-logind[1718]: New session 13 of user core. May 14 00:02:27.564658 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 00:02:28.061625 sshd[4890]: Connection closed by 10.200.16.10 port 37218 May 14 00:02:28.062497 sshd-session[4888]: pam_unix(sshd:session): session closed for user core May 14 00:02:28.065826 systemd[1]: sshd@10-10.200.8.4:22-10.200.16.10:37218.service: Deactivated successfully. May 14 00:02:28.068095 systemd[1]: session-13.scope: Deactivated successfully. May 14 00:02:28.069837 systemd-logind[1718]: Session 13 logged out. Waiting for processes to exit. May 14 00:02:28.071018 systemd-logind[1718]: Removed session 13. May 14 00:02:33.178046 systemd[1]: Started sshd@11-10.200.8.4:22-10.200.16.10:52566.service - OpenSSH per-connection server daemon (10.200.16.10:52566). May 14 00:02:33.814774 sshd[4902]: Accepted publickey for core from 10.200.16.10 port 52566 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:02:33.816767 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:33.822135 systemd-logind[1718]: New session 14 of user core. May 14 00:02:33.827746 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 14 00:02:34.337248 sshd[4904]: Connection closed by 10.200.16.10 port 52566 May 14 00:02:34.338016 sshd-session[4902]: pam_unix(sshd:session): session closed for user core May 14 00:02:34.342120 systemd[1]: sshd@11-10.200.8.4:22-10.200.16.10:52566.service: Deactivated successfully. May 14 00:02:34.344427 systemd[1]: session-14.scope: Deactivated successfully. May 14 00:02:34.345299 systemd-logind[1718]: Session 14 logged out. Waiting for processes to exit. May 14 00:02:34.346717 systemd-logind[1718]: Removed session 14. May 14 00:02:39.451797 systemd[1]: Started sshd@12-10.200.8.4:22-10.200.16.10:56670.service - OpenSSH per-connection server daemon (10.200.16.10:56670). May 14 00:02:40.081612 sshd[4917]: Accepted publickey for core from 10.200.16.10 port 56670 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:02:40.083087 sshd-session[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:40.087414 systemd-logind[1718]: New session 15 of user core. May 14 00:02:40.097783 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 00:02:40.581577 sshd[4919]: Connection closed by 10.200.16.10 port 56670 May 14 00:02:40.582478 sshd-session[4917]: pam_unix(sshd:session): session closed for user core May 14 00:02:40.585800 systemd[1]: sshd@12-10.200.8.4:22-10.200.16.10:56670.service: Deactivated successfully. May 14 00:02:40.588652 systemd[1]: session-15.scope: Deactivated successfully. May 14 00:02:40.590288 systemd-logind[1718]: Session 15 logged out. Waiting for processes to exit. May 14 00:02:40.591463 systemd-logind[1718]: Removed session 15. May 14 00:02:45.693692 systemd[1]: Started sshd@13-10.200.8.4:22-10.200.16.10:56672.service - OpenSSH per-connection server daemon (10.200.16.10:56672). 
May 14 00:02:46.330376 sshd[4931]: Accepted publickey for core from 10.200.16.10 port 56672 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:02:46.332023 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:46.338121 systemd-logind[1718]: New session 16 of user core. May 14 00:02:46.344746 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 00:02:46.830223 sshd[4933]: Connection closed by 10.200.16.10 port 56672 May 14 00:02:46.831075 sshd-session[4931]: pam_unix(sshd:session): session closed for user core May 14 00:02:46.835172 systemd[1]: sshd@13-10.200.8.4:22-10.200.16.10:56672.service: Deactivated successfully. May 14 00:02:46.837286 systemd[1]: session-16.scope: Deactivated successfully. May 14 00:02:46.838310 systemd-logind[1718]: Session 16 logged out. Waiting for processes to exit. May 14 00:02:46.839400 systemd-logind[1718]: Removed session 16. May 14 00:02:46.954028 systemd[1]: Started sshd@14-10.200.8.4:22-10.200.16.10:56688.service - OpenSSH per-connection server daemon (10.200.16.10:56688). May 14 00:02:47.583884 sshd[4946]: Accepted publickey for core from 10.200.16.10 port 56688 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:02:47.585287 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:47.589599 systemd-logind[1718]: New session 17 of user core. May 14 00:02:47.598681 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 00:02:48.123929 sshd[4948]: Connection closed by 10.200.16.10 port 56688 May 14 00:02:48.124749 sshd-session[4946]: pam_unix(sshd:session): session closed for user core May 14 00:02:48.129096 systemd[1]: sshd@14-10.200.8.4:22-10.200.16.10:56688.service: Deactivated successfully. May 14 00:02:48.131376 systemd[1]: session-17.scope: Deactivated successfully. May 14 00:02:48.132273 systemd-logind[1718]: Session 17 logged out. 
Waiting for processes to exit. May 14 00:02:48.133696 systemd-logind[1718]: Removed session 17. May 14 00:02:48.392318 systemd[1]: Started sshd@15-10.200.8.4:22-10.200.16.10:56698.service - OpenSSH per-connection server daemon (10.200.16.10:56698). May 14 00:02:49.026015 sshd[4958]: Accepted publickey for core from 10.200.16.10 port 56698 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:02:49.027756 sshd-session[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:49.032117 systemd-logind[1718]: New session 18 of user core. May 14 00:02:49.040679 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 00:02:49.528377 sshd[4960]: Connection closed by 10.200.16.10 port 56698 May 14 00:02:49.529251 sshd-session[4958]: pam_unix(sshd:session): session closed for user core May 14 00:02:49.533567 systemd[1]: sshd@15-10.200.8.4:22-10.200.16.10:56698.service: Deactivated successfully. May 14 00:02:49.535889 systemd[1]: session-18.scope: Deactivated successfully. May 14 00:02:49.536766 systemd-logind[1718]: Session 18 logged out. Waiting for processes to exit. May 14 00:02:49.537826 systemd-logind[1718]: Removed session 18. May 14 00:02:54.642819 systemd[1]: Started sshd@16-10.200.8.4:22-10.200.16.10:50764.service - OpenSSH per-connection server daemon (10.200.16.10:50764). May 14 00:02:55.274101 sshd[4974]: Accepted publickey for core from 10.200.16.10 port 50764 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:02:55.275722 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:55.280584 systemd-logind[1718]: New session 19 of user core. May 14 00:02:55.289670 systemd[1]: Started session-19.scope - Session 19 of User core. 
May 14 00:02:55.776394 sshd[4976]: Connection closed by 10.200.16.10 port 50764 May 14 00:02:55.777197 sshd-session[4974]: pam_unix(sshd:session): session closed for user core May 14 00:02:55.781414 systemd[1]: sshd@16-10.200.8.4:22-10.200.16.10:50764.service: Deactivated successfully. May 14 00:02:55.783671 systemd[1]: session-19.scope: Deactivated successfully. May 14 00:02:55.784477 systemd-logind[1718]: Session 19 logged out. Waiting for processes to exit. May 14 00:02:55.785489 systemd-logind[1718]: Removed session 19. May 14 00:02:55.892855 systemd[1]: Started sshd@17-10.200.8.4:22-10.200.16.10:50780.service - OpenSSH per-connection server daemon (10.200.16.10:50780). May 14 00:02:56.526300 sshd[4989]: Accepted publickey for core from 10.200.16.10 port 50780 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:02:56.527777 sshd-session[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:56.532160 systemd-logind[1718]: New session 20 of user core. May 14 00:02:56.542656 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 00:02:57.083024 sshd[4991]: Connection closed by 10.200.16.10 port 50780 May 14 00:02:57.083988 sshd-session[4989]: pam_unix(sshd:session): session closed for user core May 14 00:02:57.087493 systemd[1]: sshd@17-10.200.8.4:22-10.200.16.10:50780.service: Deactivated successfully. May 14 00:02:57.090268 systemd[1]: session-20.scope: Deactivated successfully. May 14 00:02:57.092112 systemd-logind[1718]: Session 20 logged out. Waiting for processes to exit. May 14 00:02:57.093251 systemd-logind[1718]: Removed session 20. May 14 00:02:57.199465 systemd[1]: Started sshd@18-10.200.8.4:22-10.200.16.10:50792.service - OpenSSH per-connection server daemon (10.200.16.10:50792). 
May 14 00:02:57.831974 sshd[5000]: Accepted publickey for core from 10.200.16.10 port 50792 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:02:57.833408 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:57.838697 systemd-logind[1718]: New session 21 of user core. May 14 00:02:57.843664 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 00:02:59.255725 sshd[5002]: Connection closed by 10.200.16.10 port 50792 May 14 00:02:59.256693 sshd-session[5000]: pam_unix(sshd:session): session closed for user core May 14 00:02:59.261928 systemd-logind[1718]: Session 21 logged out. Waiting for processes to exit. May 14 00:02:59.262744 systemd[1]: sshd@18-10.200.8.4:22-10.200.16.10:50792.service: Deactivated successfully. May 14 00:02:59.266064 systemd[1]: session-21.scope: Deactivated successfully. May 14 00:02:59.267990 systemd-logind[1718]: Removed session 21. May 14 00:02:59.367939 systemd[1]: Started sshd@19-10.200.8.4:22-10.200.16.10:36178.service - OpenSSH per-connection server daemon (10.200.16.10:36178). May 14 00:03:00.004849 sshd[5019]: Accepted publickey for core from 10.200.16.10 port 36178 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:03:00.006611 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:00.011739 systemd-logind[1718]: New session 22 of user core. May 14 00:03:00.023739 systemd[1]: Started session-22.scope - Session 22 of User core. May 14 00:03:00.616105 sshd[5021]: Connection closed by 10.200.16.10 port 36178 May 14 00:03:00.616981 sshd-session[5019]: pam_unix(sshd:session): session closed for user core May 14 00:03:00.620387 systemd[1]: sshd@19-10.200.8.4:22-10.200.16.10:36178.service: Deactivated successfully. May 14 00:03:00.623021 systemd[1]: session-22.scope: Deactivated successfully. May 14 00:03:00.624562 systemd-logind[1718]: Session 22 logged out. 
Waiting for processes to exit. May 14 00:03:00.625759 systemd-logind[1718]: Removed session 22. May 14 00:03:00.728768 systemd[1]: Started sshd@20-10.200.8.4:22-10.200.16.10:36190.service - OpenSSH per-connection server daemon (10.200.16.10:36190). May 14 00:03:01.362844 sshd[5031]: Accepted publickey for core from 10.200.16.10 port 36190 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:03:01.364316 sshd-session[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:01.368669 systemd-logind[1718]: New session 23 of user core. May 14 00:03:01.380708 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 00:03:01.865160 sshd[5033]: Connection closed by 10.200.16.10 port 36190 May 14 00:03:01.865940 sshd-session[5031]: pam_unix(sshd:session): session closed for user core May 14 00:03:01.868897 systemd[1]: sshd@20-10.200.8.4:22-10.200.16.10:36190.service: Deactivated successfully. May 14 00:03:01.871183 systemd[1]: session-23.scope: Deactivated successfully. May 14 00:03:01.873119 systemd-logind[1718]: Session 23 logged out. Waiting for processes to exit. May 14 00:03:01.874294 systemd-logind[1718]: Removed session 23. May 14 00:03:06.978108 systemd[1]: Started sshd@21-10.200.8.4:22-10.200.16.10:36206.service - OpenSSH per-connection server daemon (10.200.16.10:36206). May 14 00:03:07.613548 sshd[5048]: Accepted publickey for core from 10.200.16.10 port 36206 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:03:07.615001 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:07.619683 systemd-logind[1718]: New session 24 of user core. May 14 00:03:07.625657 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 14 00:03:08.113318 sshd[5050]: Connection closed by 10.200.16.10 port 36206 May 14 00:03:08.114198 sshd-session[5048]: pam_unix(sshd:session): session closed for user core May 14 00:03:08.118774 systemd[1]: sshd@21-10.200.8.4:22-10.200.16.10:36206.service: Deactivated successfully. May 14 00:03:08.121249 systemd[1]: session-24.scope: Deactivated successfully. May 14 00:03:08.122138 systemd-logind[1718]: Session 24 logged out. Waiting for processes to exit. May 14 00:03:08.123302 systemd-logind[1718]: Removed session 24. May 14 00:03:13.226681 systemd[1]: Started sshd@22-10.200.8.4:22-10.200.16.10:41598.service - OpenSSH per-connection server daemon (10.200.16.10:41598). May 14 00:03:13.856967 sshd[5062]: Accepted publickey for core from 10.200.16.10 port 41598 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:03:13.858494 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:13.867526 systemd-logind[1718]: New session 25 of user core. May 14 00:03:13.870675 systemd[1]: Started session-25.scope - Session 25 of User core. May 14 00:03:14.357326 sshd[5067]: Connection closed by 10.200.16.10 port 41598 May 14 00:03:14.357861 sshd-session[5062]: pam_unix(sshd:session): session closed for user core May 14 00:03:14.361028 systemd[1]: sshd@22-10.200.8.4:22-10.200.16.10:41598.service: Deactivated successfully. May 14 00:03:14.363299 systemd[1]: session-25.scope: Deactivated successfully. May 14 00:03:14.364834 systemd-logind[1718]: Session 25 logged out. Waiting for processes to exit. May 14 00:03:14.366202 systemd-logind[1718]: Removed session 25. May 14 00:03:19.470253 systemd[1]: Started sshd@23-10.200.8.4:22-10.200.16.10:33958.service - OpenSSH per-connection server daemon (10.200.16.10:33958). 
May 14 00:03:20.103963 sshd[5079]: Accepted publickey for core from 10.200.16.10 port 33958 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:03:20.104610 sshd-session[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:20.109213 systemd-logind[1718]: New session 26 of user core. May 14 00:03:20.113723 systemd[1]: Started session-26.scope - Session 26 of User core. May 14 00:03:20.605865 sshd[5081]: Connection closed by 10.200.16.10 port 33958 May 14 00:03:20.606668 sshd-session[5079]: pam_unix(sshd:session): session closed for user core May 14 00:03:20.610673 systemd[1]: sshd@23-10.200.8.4:22-10.200.16.10:33958.service: Deactivated successfully. May 14 00:03:20.612921 systemd[1]: session-26.scope: Deactivated successfully. May 14 00:03:20.613906 systemd-logind[1718]: Session 26 logged out. Waiting for processes to exit. May 14 00:03:20.614913 systemd-logind[1718]: Removed session 26. May 14 00:03:20.722445 systemd[1]: Started sshd@24-10.200.8.4:22-10.200.16.10:33972.service - OpenSSH per-connection server daemon (10.200.16.10:33972). May 14 00:03:21.356028 sshd[5092]: Accepted publickey for core from 10.200.16.10 port 33972 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:03:21.357533 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:21.361974 systemd-logind[1718]: New session 27 of user core. May 14 00:03:21.373796 systemd[1]: Started session-27.scope - Session 27 of User core. 
May 14 00:03:23.040884 containerd[1734]: time="2025-05-14T00:03:23.040364827Z" level=info msg="StopContainer for \"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9\" with timeout 30 (s)" May 14 00:03:23.041371 containerd[1734]: time="2025-05-14T00:03:23.041281337Z" level=info msg="Stop container \"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9\" with signal terminated" May 14 00:03:23.059703 systemd[1]: cri-containerd-d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9.scope: Deactivated successfully. May 14 00:03:23.064483 containerd[1734]: time="2025-05-14T00:03:23.063154989Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9\" id:\"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9\" pid:4071 exited_at:{seconds:1747181003 nanos:61995076}" May 14 00:03:23.064483 containerd[1734]: time="2025-05-14T00:03:23.063244190Z" level=info msg="received exit event container_id:\"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9\" id:\"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9\" pid:4071 exited_at:{seconds:1747181003 nanos:61995076}" May 14 00:03:23.079703 containerd[1734]: time="2025-05-14T00:03:23.079571379Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:03:23.085863 containerd[1734]: time="2025-05-14T00:03:23.085484147Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" id:\"5fefc3f8c78f6f721b959b70f1c0f6321efaa96e34f05a870c391f306bac02d4\" pid:5120 exited_at:{seconds:1747181003 nanos:85107843}" May 14 00:03:23.088000 containerd[1734]: time="2025-05-14T00:03:23.087847874Z" level=info 
msg="StopContainer for \"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" with timeout 2 (s)" May 14 00:03:23.089139 containerd[1734]: time="2025-05-14T00:03:23.088876286Z" level=info msg="Stop container \"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" with signal terminated" May 14 00:03:23.101393 systemd-networkd[1338]: lxc_health: Link DOWN May 14 00:03:23.101407 systemd-networkd[1338]: lxc_health: Lost carrier May 14 00:03:23.104324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9-rootfs.mount: Deactivated successfully. May 14 00:03:23.121499 systemd[1]: cri-containerd-4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910.scope: Deactivated successfully. May 14 00:03:23.121659 containerd[1734]: time="2025-05-14T00:03:23.121553863Z" level=info msg="received exit event container_id:\"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" id:\"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" pid:3932 exited_at:{seconds:1747181003 nanos:121238859}" May 14 00:03:23.122093 systemd[1]: cri-containerd-4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910.scope: Consumed 7.995s CPU time, 136.5M memory peak, 144K read from disk, 13.3M written to disk. May 14 00:03:23.122970 containerd[1734]: time="2025-05-14T00:03:23.122122770Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" id:\"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" pid:3932 exited_at:{seconds:1747181003 nanos:121238859}" May 14 00:03:23.147791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910-rootfs.mount: Deactivated successfully. 
May 14 00:03:23.210260 containerd[1734]: time="2025-05-14T00:03:23.210216186Z" level=info msg="StopContainer for \"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" returns successfully" May 14 00:03:23.212523 containerd[1734]: time="2025-05-14T00:03:23.211009995Z" level=info msg="StopPodSandbox for \"eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a\"" May 14 00:03:23.212523 containerd[1734]: time="2025-05-14T00:03:23.211082796Z" level=info msg="Container to stop \"daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:03:23.212523 containerd[1734]: time="2025-05-14T00:03:23.211098196Z" level=info msg="Container to stop \"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:03:23.212523 containerd[1734]: time="2025-05-14T00:03:23.211109596Z" level=info msg="Container to stop \"e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:03:23.212523 containerd[1734]: time="2025-05-14T00:03:23.211120896Z" level=info msg="Container to stop \"19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:03:23.212523 containerd[1734]: time="2025-05-14T00:03:23.211132296Z" level=info msg="Container to stop \"e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:03:23.212523 containerd[1734]: time="2025-05-14T00:03:23.212402511Z" level=info msg="StopContainer for \"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9\" returns successfully" May 14 00:03:23.213099 containerd[1734]: time="2025-05-14T00:03:23.213070619Z" level=info msg="StopPodSandbox for 
\"bbfa528554833433d8e807e05461b4dfe39e649c6b8853894f768b7cd9c9d736\"" May 14 00:03:23.213194 containerd[1734]: time="2025-05-14T00:03:23.213133619Z" level=info msg="Container to stop \"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:03:23.222944 systemd[1]: cri-containerd-bbfa528554833433d8e807e05461b4dfe39e649c6b8853894f768b7cd9c9d736.scope: Deactivated successfully. May 14 00:03:23.224497 systemd[1]: cri-containerd-eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a.scope: Deactivated successfully. May 14 00:03:23.227063 containerd[1734]: time="2025-05-14T00:03:23.226453373Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbfa528554833433d8e807e05461b4dfe39e649c6b8853894f768b7cd9c9d736\" id:\"bbfa528554833433d8e807e05461b4dfe39e649c6b8853894f768b7cd9c9d736\" pid:3693 exit_status:137 exited_at:{seconds:1747181003 nanos:225401061}" May 14 00:03:23.258899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a-rootfs.mount: Deactivated successfully. May 14 00:03:23.272449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbfa528554833433d8e807e05461b4dfe39e649c6b8853894f768b7cd9c9d736-rootfs.mount: Deactivated successfully. 
May 14 00:03:23.286530 containerd[1734]: time="2025-05-14T00:03:23.284337441Z" level=info msg="received exit event sandbox_id:\"bbfa528554833433d8e807e05461b4dfe39e649c6b8853894f768b7cd9c9d736\" exit_status:137 exited_at:{seconds:1747181003 nanos:225401061}" May 14 00:03:23.286530 containerd[1734]: time="2025-05-14T00:03:23.285659456Z" level=info msg="shim disconnected" id=bbfa528554833433d8e807e05461b4dfe39e649c6b8853894f768b7cd9c9d736 namespace=k8s.io May 14 00:03:23.286530 containerd[1734]: time="2025-05-14T00:03:23.285686956Z" level=warning msg="cleaning up after shim disconnected" id=bbfa528554833433d8e807e05461b4dfe39e649c6b8853894f768b7cd9c9d736 namespace=k8s.io May 14 00:03:23.286530 containerd[1734]: time="2025-05-14T00:03:23.285697256Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:03:23.287361 containerd[1734]: time="2025-05-14T00:03:23.287309675Z" level=info msg="shim disconnected" id=eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a namespace=k8s.io May 14 00:03:23.287361 containerd[1734]: time="2025-05-14T00:03:23.287358975Z" level=warning msg="cleaning up after shim disconnected" id=eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a namespace=k8s.io May 14 00:03:23.288060 containerd[1734]: time="2025-05-14T00:03:23.287368576Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:03:23.289372 containerd[1734]: time="2025-05-14T00:03:23.288927294Z" level=info msg="TearDown network for sandbox \"bbfa528554833433d8e807e05461b4dfe39e649c6b8853894f768b7cd9c9d736\" successfully" May 14 00:03:23.289488 containerd[1734]: time="2025-05-14T00:03:23.289468800Z" level=info msg="StopPodSandbox for \"bbfa528554833433d8e807e05461b4dfe39e649c6b8853894f768b7cd9c9d736\" returns successfully" May 14 00:03:23.289699 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bbfa528554833433d8e807e05461b4dfe39e649c6b8853894f768b7cd9c9d736-shm.mount: Deactivated successfully. 
May 14 00:03:23.304002 kubelet[3361]: I0514 00:03:23.303392 3361 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd77439c-55b7-4ed7-afe7-4df2fd32f8c0-cilium-config-path\") pod \"dd77439c-55b7-4ed7-afe7-4df2fd32f8c0\" (UID: \"dd77439c-55b7-4ed7-afe7-4df2fd32f8c0\") " May 14 00:03:23.304002 kubelet[3361]: I0514 00:03:23.303440 3361 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4qlw\" (UniqueName: \"kubernetes.io/projected/dd77439c-55b7-4ed7-afe7-4df2fd32f8c0-kube-api-access-c4qlw\") pod \"dd77439c-55b7-4ed7-afe7-4df2fd32f8c0\" (UID: \"dd77439c-55b7-4ed7-afe7-4df2fd32f8c0\") " May 14 00:03:23.316777 containerd[1734]: time="2025-05-14T00:03:23.314024183Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a\" id:\"eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a\" pid:3667 exit_status:137 exited_at:{seconds:1747181003 nanos:230989525}" May 14 00:03:23.316777 containerd[1734]: time="2025-05-14T00:03:23.314443088Z" level=info msg="received exit event sandbox_id:\"eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a\" exit_status:137 exited_at:{seconds:1747181003 nanos:230989525}" May 14 00:03:23.317891 containerd[1734]: time="2025-05-14T00:03:23.317857927Z" level=info msg="TearDown network for sandbox \"eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a\" successfully" May 14 00:03:23.317891 containerd[1734]: time="2025-05-14T00:03:23.317889528Z" level=info msg="StopPodSandbox for \"eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a\" returns successfully" May 14 00:03:23.318045 kubelet[3361]: I0514 00:03:23.317769 3361 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd77439c-55b7-4ed7-afe7-4df2fd32f8c0-cilium-config-path" (OuterVolumeSpecName: 
"cilium-config-path") pod "dd77439c-55b7-4ed7-afe7-4df2fd32f8c0" (UID: "dd77439c-55b7-4ed7-afe7-4df2fd32f8c0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 14 00:03:23.318488 kubelet[3361]: I0514 00:03:23.318449 3361 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd77439c-55b7-4ed7-afe7-4df2fd32f8c0-kube-api-access-c4qlw" (OuterVolumeSpecName: "kube-api-access-c4qlw") pod "dd77439c-55b7-4ed7-afe7-4df2fd32f8c0" (UID: "dd77439c-55b7-4ed7-afe7-4df2fd32f8c0"). InnerVolumeSpecName "kube-api-access-c4qlw". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 00:03:23.405055 kubelet[3361]: I0514 00:03:23.404520 3361 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef58f594-6563-4ba4-8d64-e1d9d3132abe-clustermesh-secrets\") pod \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " May 14 00:03:23.405055 kubelet[3361]: I0514 00:03:23.404577 3361 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef58f594-6563-4ba4-8d64-e1d9d3132abe-hubble-tls\") pod \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " May 14 00:03:23.405055 kubelet[3361]: I0514 00:03:23.404618 3361 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef58f594-6563-4ba4-8d64-e1d9d3132abe-cilium-config-path\") pod \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " May 14 00:03:23.405055 kubelet[3361]: I0514 00:03:23.404638 3361 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-hostproc\") pod 
\"ef58f594-6563-4ba4-8d64-e1d9d3132abe\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " May 14 00:03:23.405055 kubelet[3361]: I0514 00:03:23.404660 3361 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-host-proc-sys-kernel\") pod \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " May 14 00:03:23.405055 kubelet[3361]: I0514 00:03:23.404688 3361 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwh5r\" (UniqueName: \"kubernetes.io/projected/ef58f594-6563-4ba4-8d64-e1d9d3132abe-kube-api-access-vwh5r\") pod \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " May 14 00:03:23.405823 kubelet[3361]: I0514 00:03:23.404710 3361 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-etc-cni-netd\") pod \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " May 14 00:03:23.405823 kubelet[3361]: I0514 00:03:23.404729 3361 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-bpf-maps\") pod \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " May 14 00:03:23.405823 kubelet[3361]: I0514 00:03:23.404750 3361 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-cni-path\") pod \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " May 14 00:03:23.405823 kubelet[3361]: I0514 00:03:23.404775 3361 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-host-proc-sys-net\") pod \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " May 14 00:03:23.405823 kubelet[3361]: I0514 00:03:23.404794 3361 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-lib-modules\") pod \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " May 14 00:03:23.405823 kubelet[3361]: I0514 00:03:23.404812 3361 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-cilium-run\") pod \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " May 14 00:03:23.406056 kubelet[3361]: I0514 00:03:23.404830 3361 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-cilium-cgroup\") pod \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " May 14 00:03:23.406056 kubelet[3361]: I0514 00:03:23.404854 3361 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-xtables-lock\") pod \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\" (UID: \"ef58f594-6563-4ba4-8d64-e1d9d3132abe\") " May 14 00:03:23.406056 kubelet[3361]: I0514 00:03:23.404903 3361 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd77439c-55b7-4ed7-afe7-4df2fd32f8c0-cilium-config-path\") on node \"ci-4284.0.0-n-1d9e750aa6\" DevicePath \"\"" May 14 00:03:23.406056 kubelet[3361]: I0514 00:03:23.404919 3361 reconciler_common.go:299] "Volume detached 
for volume \"kube-api-access-c4qlw\" (UniqueName: \"kubernetes.io/projected/dd77439c-55b7-4ed7-afe7-4df2fd32f8c0-kube-api-access-c4qlw\") on node \"ci-4284.0.0-n-1d9e750aa6\" DevicePath \"\"" May 14 00:03:23.406056 kubelet[3361]: I0514 00:03:23.404954 3361 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ef58f594-6563-4ba4-8d64-e1d9d3132abe" (UID: "ef58f594-6563-4ba4-8d64-e1d9d3132abe"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:03:23.409308 kubelet[3361]: I0514 00:03:23.408873 3361 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef58f594-6563-4ba4-8d64-e1d9d3132abe-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ef58f594-6563-4ba4-8d64-e1d9d3132abe" (UID: "ef58f594-6563-4ba4-8d64-e1d9d3132abe"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 00:03:23.409308 kubelet[3361]: I0514 00:03:23.408927 3361 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-hostproc" (OuterVolumeSpecName: "hostproc") pod "ef58f594-6563-4ba4-8d64-e1d9d3132abe" (UID: "ef58f594-6563-4ba4-8d64-e1d9d3132abe"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:03:23.409308 kubelet[3361]: I0514 00:03:23.408950 3361 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ef58f594-6563-4ba4-8d64-e1d9d3132abe" (UID: "ef58f594-6563-4ba4-8d64-e1d9d3132abe"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:03:23.409308 kubelet[3361]: I0514 00:03:23.408978 3361 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ef58f594-6563-4ba4-8d64-e1d9d3132abe" (UID: "ef58f594-6563-4ba4-8d64-e1d9d3132abe"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:03:23.409308 kubelet[3361]: I0514 00:03:23.409007 3361 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ef58f594-6563-4ba4-8d64-e1d9d3132abe" (UID: "ef58f594-6563-4ba4-8d64-e1d9d3132abe"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:03:23.409617 kubelet[3361]: I0514 00:03:23.409035 3361 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ef58f594-6563-4ba4-8d64-e1d9d3132abe" (UID: "ef58f594-6563-4ba4-8d64-e1d9d3132abe"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:03:23.409617 kubelet[3361]: I0514 00:03:23.409056 3361 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-cni-path" (OuterVolumeSpecName: "cni-path") pod "ef58f594-6563-4ba4-8d64-e1d9d3132abe" (UID: "ef58f594-6563-4ba4-8d64-e1d9d3132abe"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:03:23.409617 kubelet[3361]: I0514 00:03:23.409077 3361 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ef58f594-6563-4ba4-8d64-e1d9d3132abe" (UID: "ef58f594-6563-4ba4-8d64-e1d9d3132abe"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:03:23.409617 kubelet[3361]: I0514 00:03:23.409095 3361 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ef58f594-6563-4ba4-8d64-e1d9d3132abe" (UID: "ef58f594-6563-4ba4-8d64-e1d9d3132abe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:03:23.409617 kubelet[3361]: I0514 00:03:23.409119 3361 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ef58f594-6563-4ba4-8d64-e1d9d3132abe" (UID: "ef58f594-6563-4ba4-8d64-e1d9d3132abe"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:03:23.409951 kubelet[3361]: I0514 00:03:23.409869 3361 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef58f594-6563-4ba4-8d64-e1d9d3132abe-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ef58f594-6563-4ba4-8d64-e1d9d3132abe" (UID: "ef58f594-6563-4ba4-8d64-e1d9d3132abe"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 14 00:03:23.412265 kubelet[3361]: I0514 00:03:23.412209 3361 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef58f594-6563-4ba4-8d64-e1d9d3132abe-kube-api-access-vwh5r" (OuterVolumeSpecName: "kube-api-access-vwh5r") pod "ef58f594-6563-4ba4-8d64-e1d9d3132abe" (UID: "ef58f594-6563-4ba4-8d64-e1d9d3132abe"). InnerVolumeSpecName "kube-api-access-vwh5r". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 00:03:23.412434 kubelet[3361]: I0514 00:03:23.412407 3361 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef58f594-6563-4ba4-8d64-e1d9d3132abe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ef58f594-6563-4ba4-8d64-e1d9d3132abe" (UID: "ef58f594-6563-4ba4-8d64-e1d9d3132abe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 14 00:03:23.505141 kubelet[3361]: I0514 00:03:23.505094 3361 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-xtables-lock\") on node \"ci-4284.0.0-n-1d9e750aa6\" DevicePath \"\"" May 14 00:03:23.505141 kubelet[3361]: I0514 00:03:23.505137 3361 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef58f594-6563-4ba4-8d64-e1d9d3132abe-cilium-config-path\") on node \"ci-4284.0.0-n-1d9e750aa6\" DevicePath \"\"" May 14 00:03:23.505141 kubelet[3361]: I0514 00:03:23.505151 3361 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-hostproc\") on node \"ci-4284.0.0-n-1d9e750aa6\" DevicePath \"\"" May 14 00:03:23.505387 kubelet[3361]: I0514 00:03:23.505163 3361 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-host-proc-sys-kernel\") on node \"ci-4284.0.0-n-1d9e750aa6\" DevicePath \"\"" May 14 00:03:23.505387 kubelet[3361]: I0514 00:03:23.505180 3361 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef58f594-6563-4ba4-8d64-e1d9d3132abe-clustermesh-secrets\") on node \"ci-4284.0.0-n-1d9e750aa6\" DevicePath \"\"" May 14 00:03:23.505387 kubelet[3361]: I0514 00:03:23.505191 3361 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef58f594-6563-4ba4-8d64-e1d9d3132abe-hubble-tls\") on node \"ci-4284.0.0-n-1d9e750aa6\" DevicePath \"\"" May 14 00:03:23.505387 kubelet[3361]: I0514 00:03:23.505201 3361 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vwh5r\" (UniqueName: \"kubernetes.io/projected/ef58f594-6563-4ba4-8d64-e1d9d3132abe-kube-api-access-vwh5r\") on node \"ci-4284.0.0-n-1d9e750aa6\" DevicePath \"\"" May 14 00:03:23.505387 kubelet[3361]: I0514 00:03:23.505211 3361 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-etc-cni-netd\") on node \"ci-4284.0.0-n-1d9e750aa6\" DevicePath \"\"" May 14 00:03:23.505387 kubelet[3361]: I0514 00:03:23.505222 3361 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-bpf-maps\") on node \"ci-4284.0.0-n-1d9e750aa6\" DevicePath \"\"" May 14 00:03:23.505387 kubelet[3361]: I0514 00:03:23.505232 3361 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-cni-path\") on node \"ci-4284.0.0-n-1d9e750aa6\" DevicePath \"\"" May 14 00:03:23.505387 kubelet[3361]: I0514 00:03:23.505242 3361 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-host-proc-sys-net\") on node \"ci-4284.0.0-n-1d9e750aa6\" DevicePath \"\"" May 14 00:03:23.505610 kubelet[3361]: I0514 00:03:23.505252 3361 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-lib-modules\") on node \"ci-4284.0.0-n-1d9e750aa6\" DevicePath \"\"" May 14 00:03:23.505610 kubelet[3361]: I0514 00:03:23.505265 3361 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-cilium-run\") on node \"ci-4284.0.0-n-1d9e750aa6\" DevicePath \"\"" May 14 00:03:23.505610 kubelet[3361]: I0514 00:03:23.505277 3361 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef58f594-6563-4ba4-8d64-e1d9d3132abe-cilium-cgroup\") on node \"ci-4284.0.0-n-1d9e750aa6\" DevicePath \"\"" May 14 00:03:24.039549 kubelet[3361]: I0514 00:03:24.039342 3361 scope.go:117] "RemoveContainer" containerID="d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9" May 14 00:03:24.045566 containerd[1734]: time="2025-05-14T00:03:24.044820412Z" level=info msg="RemoveContainer for \"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9\"" May 14 00:03:24.048109 systemd[1]: Removed slice kubepods-besteffort-poddd77439c_55b7_4ed7_afe7_4df2fd32f8c0.slice - libcontainer container kubepods-besteffort-poddd77439c_55b7_4ed7_afe7_4df2fd32f8c0.slice. May 14 00:03:24.055010 systemd[1]: Removed slice kubepods-burstable-podef58f594_6563_4ba4_8d64_e1d9d3132abe.slice - libcontainer container kubepods-burstable-podef58f594_6563_4ba4_8d64_e1d9d3132abe.slice. May 14 00:03:24.055289 systemd[1]: kubepods-burstable-podef58f594_6563_4ba4_8d64_e1d9d3132abe.slice: Consumed 8.087s CPU time, 136.9M memory peak, 144K read from disk, 13.3M written to disk. 
May 14 00:03:24.057875 containerd[1734]: time="2025-05-14T00:03:24.057579159Z" level=info msg="RemoveContainer for \"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9\" returns successfully" May 14 00:03:24.058892 kubelet[3361]: I0514 00:03:24.058757 3361 scope.go:117] "RemoveContainer" containerID="d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9" May 14 00:03:24.059250 containerd[1734]: time="2025-05-14T00:03:24.059209978Z" level=error msg="ContainerStatus for \"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9\": not found" May 14 00:03:24.060256 kubelet[3361]: E0514 00:03:24.059346 3361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9\": not found" containerID="d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9" May 14 00:03:24.060256 kubelet[3361]: I0514 00:03:24.059373 3361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9"} err="failed to get container status \"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"d58f3b037561ba23ff86604b0251a962dee9a2cd58a792da798f62a5c4da67e9\": not found" May 14 00:03:24.060256 kubelet[3361]: I0514 00:03:24.059434 3361 scope.go:117] "RemoveContainer" containerID="4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910" May 14 00:03:24.062352 containerd[1734]: time="2025-05-14T00:03:24.062215413Z" level=info msg="RemoveContainer for \"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\"" May 14 00:03:24.076066 
containerd[1734]: time="2025-05-14T00:03:24.075886470Z" level=info msg="RemoveContainer for \"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" returns successfully" May 14 00:03:24.076231 kubelet[3361]: I0514 00:03:24.076132 3361 scope.go:117] "RemoveContainer" containerID="daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b" May 14 00:03:24.079775 containerd[1734]: time="2025-05-14T00:03:24.078617702Z" level=info msg="RemoveContainer for \"daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b\"" May 14 00:03:24.090626 containerd[1734]: time="2025-05-14T00:03:24.090565140Z" level=info msg="RemoveContainer for \"daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b\" returns successfully" May 14 00:03:24.091006 kubelet[3361]: I0514 00:03:24.090976 3361 scope.go:117] "RemoveContainer" containerID="19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d" May 14 00:03:24.093704 containerd[1734]: time="2025-05-14T00:03:24.093064568Z" level=info msg="RemoveContainer for \"19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d\"" May 14 00:03:24.099845 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eac17f7f0bb906c1ff40f4369ac3cd2656116ba73ddfbf162a6c048cbd97bb3a-shm.mount: Deactivated successfully. May 14 00:03:24.100321 systemd[1]: var-lib-kubelet-pods-ef58f594\x2d6563\x2d4ba4\x2d8d64\x2de1d9d3132abe-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 00:03:24.100421 systemd[1]: var-lib-kubelet-pods-dd77439c\x2d55b7\x2d4ed7\x2dafe7\x2d4df2fd32f8c0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc4qlw.mount: Deactivated successfully. May 14 00:03:24.100523 systemd[1]: var-lib-kubelet-pods-ef58f594\x2d6563\x2d4ba4\x2d8d64\x2de1d9d3132abe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvwh5r.mount: Deactivated successfully. 
May 14 00:03:24.100612 systemd[1]: var-lib-kubelet-pods-ef58f594\x2d6563\x2d4ba4\x2d8d64\x2de1d9d3132abe-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 00:03:24.106007 containerd[1734]: time="2025-05-14T00:03:24.105971517Z" level=info msg="RemoveContainer for \"19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d\" returns successfully" May 14 00:03:24.106255 kubelet[3361]: I0514 00:03:24.106190 3361 scope.go:117] "RemoveContainer" containerID="e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055" May 14 00:03:24.108109 containerd[1734]: time="2025-05-14T00:03:24.107647037Z" level=info msg="RemoveContainer for \"e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055\"" May 14 00:03:24.117563 containerd[1734]: time="2025-05-14T00:03:24.117530251Z" level=info msg="RemoveContainer for \"e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055\" returns successfully" May 14 00:03:24.117755 kubelet[3361]: I0514 00:03:24.117703 3361 scope.go:117] "RemoveContainer" containerID="e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75" May 14 00:03:24.119097 containerd[1734]: time="2025-05-14T00:03:24.119066968Z" level=info msg="RemoveContainer for \"e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75\"" May 14 00:03:24.127270 containerd[1734]: time="2025-05-14T00:03:24.127241363Z" level=info msg="RemoveContainer for \"e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75\" returns successfully" May 14 00:03:24.127530 kubelet[3361]: I0514 00:03:24.127462 3361 scope.go:117] "RemoveContainer" containerID="4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910" May 14 00:03:24.127771 containerd[1734]: time="2025-05-14T00:03:24.127732168Z" level=error msg="ContainerStatus for \"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\": not found" May 14 00:03:24.127889 kubelet[3361]: E0514 00:03:24.127865 3361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\": not found" containerID="4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910" May 14 00:03:24.127978 kubelet[3361]: I0514 00:03:24.127901 3361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910"} err="failed to get container status \"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\": rpc error: code = NotFound desc = an error occurred when try to find container \"4789a9075faa9d782fe1549b075bae89cae912cb7d6372e52096f06a552f5910\": not found" May 14 00:03:24.127978 kubelet[3361]: I0514 00:03:24.127932 3361 scope.go:117] "RemoveContainer" containerID="daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b" May 14 00:03:24.128188 containerd[1734]: time="2025-05-14T00:03:24.128109073Z" level=error msg="ContainerStatus for \"daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b\": not found" May 14 00:03:24.128258 kubelet[3361]: E0514 00:03:24.128214 3361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b\": not found" containerID="daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b" May 14 00:03:24.128258 kubelet[3361]: I0514 00:03:24.128241 3361 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b"} err="failed to get container status \"daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b\": rpc error: code = NotFound desc = an error occurred when try to find container \"daf5841e1836e6cc8b2acfc213cc637c287a6456c633e819755664e660130b3b\": not found" May 14 00:03:24.128353 kubelet[3361]: I0514 00:03:24.128264 3361 scope.go:117] "RemoveContainer" containerID="19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d" May 14 00:03:24.128545 containerd[1734]: time="2025-05-14T00:03:24.128441876Z" level=error msg="ContainerStatus for \"19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d\": not found" May 14 00:03:24.128620 kubelet[3361]: E0514 00:03:24.128561 3361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d\": not found" containerID="19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d" May 14 00:03:24.128620 kubelet[3361]: I0514 00:03:24.128590 3361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d"} err="failed to get container status \"19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d\": rpc error: code = NotFound desc = an error occurred when try to find container \"19db0d71052b9c5bac23a11b4e201024b8a29f3b1aed784bd9c273509f23a69d\": not found" May 14 00:03:24.128620 kubelet[3361]: I0514 00:03:24.128612 3361 scope.go:117] "RemoveContainer" containerID="e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055" May 14 00:03:24.128906 containerd[1734]: 
time="2025-05-14T00:03:24.128870981Z" level=error msg="ContainerStatus for \"e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055\": not found" May 14 00:03:24.129039 kubelet[3361]: E0514 00:03:24.129017 3361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055\": not found" containerID="e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055" May 14 00:03:24.129135 kubelet[3361]: I0514 00:03:24.129043 3361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055"} err="failed to get container status \"e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055\": rpc error: code = NotFound desc = an error occurred when try to find container \"e344e9a24473937ba16b58e8ea0c829e86d5da8cc103f1ccfaa8f6f4626dd055\": not found" May 14 00:03:24.129135 kubelet[3361]: I0514 00:03:24.129064 3361 scope.go:117] "RemoveContainer" containerID="e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75" May 14 00:03:24.129283 containerd[1734]: time="2025-05-14T00:03:24.129244786Z" level=error msg="ContainerStatus for \"e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75\": not found" May 14 00:03:24.129409 kubelet[3361]: E0514 00:03:24.129387 3361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75\": not 
found" containerID="e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75" May 14 00:03:24.129493 kubelet[3361]: I0514 00:03:24.129412 3361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75"} err="failed to get container status \"e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8adf83e8e3e30e65cbef39fcfa158f5ba93301c853e250c7f8f88ae60962d75\": not found" May 14 00:03:24.502347 kubelet[3361]: I0514 00:03:24.502305 3361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd77439c-55b7-4ed7-afe7-4df2fd32f8c0" path="/var/lib/kubelet/pods/dd77439c-55b7-4ed7-afe7-4df2fd32f8c0/volumes" May 14 00:03:24.502887 kubelet[3361]: I0514 00:03:24.502862 3361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef58f594-6563-4ba4-8d64-e1d9d3132abe" path="/var/lib/kubelet/pods/ef58f594-6563-4ba4-8d64-e1d9d3132abe/volumes" May 14 00:03:25.077447 sshd[5094]: Connection closed by 10.200.16.10 port 33972 May 14 00:03:25.078416 sshd-session[5092]: pam_unix(sshd:session): session closed for user core May 14 00:03:25.081657 systemd[1]: sshd@24-10.200.8.4:22-10.200.16.10:33972.service: Deactivated successfully. May 14 00:03:25.084026 systemd[1]: session-27.scope: Deactivated successfully. May 14 00:03:25.085588 systemd-logind[1718]: Session 27 logged out. Waiting for processes to exit. May 14 00:03:25.087010 systemd-logind[1718]: Removed session 27. May 14 00:03:25.204029 systemd[1]: Started sshd@25-10.200.8.4:22-10.200.16.10:33980.service - OpenSSH per-connection server daemon (10.200.16.10:33980). 
May 14 00:03:25.840803 sshd[5242]: Accepted publickey for core from 10.200.16.10 port 33980 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:03:25.842441 sshd-session[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:25.846912 systemd-logind[1718]: New session 28 of user core. May 14 00:03:25.851664 systemd[1]: Started session-28.scope - Session 28 of User core. May 14 00:03:27.038529 kubelet[3361]: I0514 00:03:27.036570 3361 memory_manager.go:355] "RemoveStaleState removing state" podUID="ef58f594-6563-4ba4-8d64-e1d9d3132abe" containerName="cilium-agent" May 14 00:03:27.038529 kubelet[3361]: I0514 00:03:27.036602 3361 memory_manager.go:355] "RemoveStaleState removing state" podUID="dd77439c-55b7-4ed7-afe7-4df2fd32f8c0" containerName="cilium-operator" May 14 00:03:27.049649 systemd[1]: Created slice kubepods-burstable-podf7088f76_fe00_464f_acc8_caffc9acc12d.slice - libcontainer container kubepods-burstable-podf7088f76_fe00_464f_acc8_caffc9acc12d.slice. May 14 00:03:27.112660 sshd[5246]: Connection closed by 10.200.16.10 port 33980 May 14 00:03:27.113769 sshd-session[5242]: pam_unix(sshd:session): session closed for user core May 14 00:03:27.122015 systemd[1]: sshd@25-10.200.8.4:22-10.200.16.10:33980.service: Deactivated successfully. 
May 14 00:03:27.127201 kubelet[3361]: I0514 00:03:27.125899 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7088f76-fe00-464f-acc8-caffc9acc12d-bpf-maps\") pod \"cilium-9q828\" (UID: \"f7088f76-fe00-464f-acc8-caffc9acc12d\") " pod="kube-system/cilium-9q828" May 14 00:03:27.127201 kubelet[3361]: I0514 00:03:27.125941 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7088f76-fe00-464f-acc8-caffc9acc12d-hostproc\") pod \"cilium-9q828\" (UID: \"f7088f76-fe00-464f-acc8-caffc9acc12d\") " pod="kube-system/cilium-9q828" May 14 00:03:27.127201 kubelet[3361]: I0514 00:03:27.125970 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cvmt\" (UniqueName: \"kubernetes.io/projected/f7088f76-fe00-464f-acc8-caffc9acc12d-kube-api-access-7cvmt\") pod \"cilium-9q828\" (UID: \"f7088f76-fe00-464f-acc8-caffc9acc12d\") " pod="kube-system/cilium-9q828" May 14 00:03:27.127201 kubelet[3361]: I0514 00:03:27.125996 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7088f76-fe00-464f-acc8-caffc9acc12d-etc-cni-netd\") pod \"cilium-9q828\" (UID: \"f7088f76-fe00-464f-acc8-caffc9acc12d\") " pod="kube-system/cilium-9q828" May 14 00:03:27.127201 kubelet[3361]: I0514 00:03:27.126019 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f7088f76-fe00-464f-acc8-caffc9acc12d-cilium-ipsec-secrets\") pod \"cilium-9q828\" (UID: \"f7088f76-fe00-464f-acc8-caffc9acc12d\") " pod="kube-system/cilium-9q828" May 14 00:03:27.127201 kubelet[3361]: I0514 00:03:27.126039 3361 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7088f76-fe00-464f-acc8-caffc9acc12d-host-proc-sys-net\") pod \"cilium-9q828\" (UID: \"f7088f76-fe00-464f-acc8-caffc9acc12d\") " pod="kube-system/cilium-9q828" May 14 00:03:27.127579 kubelet[3361]: I0514 00:03:27.126062 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7088f76-fe00-464f-acc8-caffc9acc12d-cilium-run\") pod \"cilium-9q828\" (UID: \"f7088f76-fe00-464f-acc8-caffc9acc12d\") " pod="kube-system/cilium-9q828" May 14 00:03:27.127579 kubelet[3361]: I0514 00:03:27.126081 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7088f76-fe00-464f-acc8-caffc9acc12d-xtables-lock\") pod \"cilium-9q828\" (UID: \"f7088f76-fe00-464f-acc8-caffc9acc12d\") " pod="kube-system/cilium-9q828" May 14 00:03:27.127579 kubelet[3361]: I0514 00:03:27.126103 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7088f76-fe00-464f-acc8-caffc9acc12d-lib-modules\") pod \"cilium-9q828\" (UID: \"f7088f76-fe00-464f-acc8-caffc9acc12d\") " pod="kube-system/cilium-9q828" May 14 00:03:27.127579 kubelet[3361]: I0514 00:03:27.126126 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7088f76-fe00-464f-acc8-caffc9acc12d-clustermesh-secrets\") pod \"cilium-9q828\" (UID: \"f7088f76-fe00-464f-acc8-caffc9acc12d\") " pod="kube-system/cilium-9q828" May 14 00:03:27.127579 kubelet[3361]: I0514 00:03:27.126147 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/f7088f76-fe00-464f-acc8-caffc9acc12d-cilium-config-path\") pod \"cilium-9q828\" (UID: \"f7088f76-fe00-464f-acc8-caffc9acc12d\") " pod="kube-system/cilium-9q828" May 14 00:03:27.127579 kubelet[3361]: I0514 00:03:27.126166 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7088f76-fe00-464f-acc8-caffc9acc12d-hubble-tls\") pod \"cilium-9q828\" (UID: \"f7088f76-fe00-464f-acc8-caffc9acc12d\") " pod="kube-system/cilium-9q828" May 14 00:03:27.127839 kubelet[3361]: I0514 00:03:27.126187 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7088f76-fe00-464f-acc8-caffc9acc12d-cilium-cgroup\") pod \"cilium-9q828\" (UID: \"f7088f76-fe00-464f-acc8-caffc9acc12d\") " pod="kube-system/cilium-9q828" May 14 00:03:27.127839 kubelet[3361]: I0514 00:03:27.126213 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7088f76-fe00-464f-acc8-caffc9acc12d-host-proc-sys-kernel\") pod \"cilium-9q828\" (UID: \"f7088f76-fe00-464f-acc8-caffc9acc12d\") " pod="kube-system/cilium-9q828" May 14 00:03:27.127839 kubelet[3361]: I0514 00:03:27.126242 3361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7088f76-fe00-464f-acc8-caffc9acc12d-cni-path\") pod \"cilium-9q828\" (UID: \"f7088f76-fe00-464f-acc8-caffc9acc12d\") " pod="kube-system/cilium-9q828" May 14 00:03:27.128540 systemd[1]: session-28.scope: Deactivated successfully. May 14 00:03:27.130307 systemd-logind[1718]: Session 28 logged out. Waiting for processes to exit. May 14 00:03:27.134205 systemd-logind[1718]: Removed session 28. 
May 14 00:03:27.224917 systemd[1]: Started sshd@26-10.200.8.4:22-10.200.16.10:33994.service - OpenSSH per-connection server daemon (10.200.16.10:33994). May 14 00:03:27.355623 containerd[1734]: time="2025-05-14T00:03:27.355455360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9q828,Uid:f7088f76-fe00-464f-acc8-caffc9acc12d,Namespace:kube-system,Attempt:0,}" May 14 00:03:27.403150 containerd[1734]: time="2025-05-14T00:03:27.402694266Z" level=info msg="connecting to shim ce45ef5ef1e0ca32cb5b33c947da1d0494a1383b0bdf798140b5679347ca939d" address="unix:///run/containerd/s/8127e32e4916815e0df0a3cc3aa5093b570eba1997a195110f78c7b218fc1d0e" namespace=k8s.io protocol=ttrpc version=3 May 14 00:03:27.429701 systemd[1]: Started cri-containerd-ce45ef5ef1e0ca32cb5b33c947da1d0494a1383b0bdf798140b5679347ca939d.scope - libcontainer container ce45ef5ef1e0ca32cb5b33c947da1d0494a1383b0bdf798140b5679347ca939d. May 14 00:03:27.456783 containerd[1734]: time="2025-05-14T00:03:27.456730473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9q828,Uid:f7088f76-fe00-464f-acc8-caffc9acc12d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce45ef5ef1e0ca32cb5b33c947da1d0494a1383b0bdf798140b5679347ca939d\"" May 14 00:03:27.459624 containerd[1734]: time="2025-05-14T00:03:27.459578516Z" level=info msg="CreateContainer within sandbox \"ce45ef5ef1e0ca32cb5b33c947da1d0494a1383b0bdf798140b5679347ca939d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:03:27.485440 containerd[1734]: time="2025-05-14T00:03:27.485393902Z" level=info msg="Container 395ba5b762d11d36190f4a707084fa96230d4d715ebe31ddcd4273c2bd57383a: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:27.501753 containerd[1734]: time="2025-05-14T00:03:27.501711045Z" level=info msg="CreateContainer within sandbox \"ce45ef5ef1e0ca32cb5b33c947da1d0494a1383b0bdf798140b5679347ca939d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"395ba5b762d11d36190f4a707084fa96230d4d715ebe31ddcd4273c2bd57383a\"" May 14 00:03:27.502421 containerd[1734]: time="2025-05-14T00:03:27.502376455Z" level=info msg="StartContainer for \"395ba5b762d11d36190f4a707084fa96230d4d715ebe31ddcd4273c2bd57383a\"" May 14 00:03:27.503358 containerd[1734]: time="2025-05-14T00:03:27.503324370Z" level=info msg="connecting to shim 395ba5b762d11d36190f4a707084fa96230d4d715ebe31ddcd4273c2bd57383a" address="unix:///run/containerd/s/8127e32e4916815e0df0a3cc3aa5093b570eba1997a195110f78c7b218fc1d0e" protocol=ttrpc version=3 May 14 00:03:27.526710 systemd[1]: Started cri-containerd-395ba5b762d11d36190f4a707084fa96230d4d715ebe31ddcd4273c2bd57383a.scope - libcontainer container 395ba5b762d11d36190f4a707084fa96230d4d715ebe31ddcd4273c2bd57383a. May 14 00:03:27.565554 containerd[1734]: time="2025-05-14T00:03:27.564826689Z" level=info msg="StartContainer for \"395ba5b762d11d36190f4a707084fa96230d4d715ebe31ddcd4273c2bd57383a\" returns successfully" May 14 00:03:27.569921 systemd[1]: cri-containerd-395ba5b762d11d36190f4a707084fa96230d4d715ebe31ddcd4273c2bd57383a.scope: Deactivated successfully. 
May 14 00:03:27.572458 containerd[1734]: time="2025-05-14T00:03:27.572361401Z" level=info msg="received exit event container_id:\"395ba5b762d11d36190f4a707084fa96230d4d715ebe31ddcd4273c2bd57383a\" id:\"395ba5b762d11d36190f4a707084fa96230d4d715ebe31ddcd4273c2bd57383a\" pid:5318 exited_at:{seconds:1747181007 nanos:572049497}" May 14 00:03:27.572458 containerd[1734]: time="2025-05-14T00:03:27.572430002Z" level=info msg="TaskExit event in podsandbox handler container_id:\"395ba5b762d11d36190f4a707084fa96230d4d715ebe31ddcd4273c2bd57383a\" id:\"395ba5b762d11d36190f4a707084fa96230d4d715ebe31ddcd4273c2bd57383a\" pid:5318 exited_at:{seconds:1747181007 nanos:572049497}" May 14 00:03:27.626183 kubelet[3361]: E0514 00:03:27.625997 3361 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 00:03:27.886824 sshd[5257]: Accepted publickey for core from 10.200.16.10 port 33994 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:03:27.888500 sshd-session[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:27.894561 systemd-logind[1718]: New session 29 of user core. May 14 00:03:27.898739 systemd[1]: Started session-29.scope - Session 29 of User core. 
May 14 00:03:28.065740 containerd[1734]: time="2025-05-14T00:03:28.064415855Z" level=info msg="CreateContainer within sandbox \"ce45ef5ef1e0ca32cb5b33c947da1d0494a1383b0bdf798140b5679347ca939d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 00:03:28.087991 containerd[1734]: time="2025-05-14T00:03:28.087935706Z" level=info msg="Container bb691e22e122706c79379f301fec3901de802584db7cd60aa1ac49fbcbc54c38: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:28.105329 containerd[1734]: time="2025-05-14T00:03:28.105281865Z" level=info msg="CreateContainer within sandbox \"ce45ef5ef1e0ca32cb5b33c947da1d0494a1383b0bdf798140b5679347ca939d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bb691e22e122706c79379f301fec3901de802584db7cd60aa1ac49fbcbc54c38\"" May 14 00:03:28.106109 containerd[1734]: time="2025-05-14T00:03:28.105848474Z" level=info msg="StartContainer for \"bb691e22e122706c79379f301fec3901de802584db7cd60aa1ac49fbcbc54c38\"" May 14 00:03:28.107731 containerd[1734]: time="2025-05-14T00:03:28.107182494Z" level=info msg="connecting to shim bb691e22e122706c79379f301fec3901de802584db7cd60aa1ac49fbcbc54c38" address="unix:///run/containerd/s/8127e32e4916815e0df0a3cc3aa5093b570eba1997a195110f78c7b218fc1d0e" protocol=ttrpc version=3 May 14 00:03:28.128675 systemd[1]: Started cri-containerd-bb691e22e122706c79379f301fec3901de802584db7cd60aa1ac49fbcbc54c38.scope - libcontainer container bb691e22e122706c79379f301fec3901de802584db7cd60aa1ac49fbcbc54c38. May 14 00:03:28.166835 containerd[1734]: time="2025-05-14T00:03:28.165710269Z" level=info msg="StartContainer for \"bb691e22e122706c79379f301fec3901de802584db7cd60aa1ac49fbcbc54c38\" returns successfully" May 14 00:03:28.167723 systemd[1]: cri-containerd-bb691e22e122706c79379f301fec3901de802584db7cd60aa1ac49fbcbc54c38.scope: Deactivated successfully. 
May 14 00:03:28.170333 containerd[1734]: time="2025-05-14T00:03:28.170300637Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb691e22e122706c79379f301fec3901de802584db7cd60aa1ac49fbcbc54c38\" id:\"bb691e22e122706c79379f301fec3901de802584db7cd60aa1ac49fbcbc54c38\" pid:5363 exited_at:{seconds:1747181008 nanos:169456725}" May 14 00:03:28.170533 containerd[1734]: time="2025-05-14T00:03:28.170488740Z" level=info msg="received exit event container_id:\"bb691e22e122706c79379f301fec3901de802584db7cd60aa1ac49fbcbc54c38\" id:\"bb691e22e122706c79379f301fec3901de802584db7cd60aa1ac49fbcbc54c38\" pid:5363 exited_at:{seconds:1747181008 nanos:169456725}" May 14 00:03:28.328577 sshd[5350]: Connection closed by 10.200.16.10 port 33994 May 14 00:03:28.329563 sshd-session[5257]: pam_unix(sshd:session): session closed for user core May 14 00:03:28.332974 systemd[1]: sshd@26-10.200.8.4:22-10.200.16.10:33994.service: Deactivated successfully. May 14 00:03:28.335427 systemd[1]: session-29.scope: Deactivated successfully. May 14 00:03:28.337590 systemd-logind[1718]: Session 29 logged out. Waiting for processes to exit. May 14 00:03:28.338718 systemd-logind[1718]: Removed session 29. May 14 00:03:28.440639 systemd[1]: Started sshd@27-10.200.8.4:22-10.200.16.10:33996.service - OpenSSH per-connection server daemon (10.200.16.10:33996). May 14 00:03:29.070555 containerd[1734]: time="2025-05-14T00:03:29.068977467Z" level=info msg="CreateContainer within sandbox \"ce45ef5ef1e0ca32cb5b33c947da1d0494a1383b0bdf798140b5679347ca939d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 00:03:29.074800 sshd[5400]: Accepted publickey for core from 10.200.16.10 port 33996 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:03:29.077561 sshd-session[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:29.086301 systemd-logind[1718]: New session 30 of user core. 
May 14 00:03:29.091741 systemd[1]: Started session-30.scope - Session 30 of User core. May 14 00:03:29.106691 containerd[1734]: time="2025-05-14T00:03:29.101754757Z" level=info msg="Container 5a4b999b766d9eb047c7c7c568361737aef1c03e52481940cc769d4e613f09e5: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:29.111005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4238726453.mount: Deactivated successfully. May 14 00:03:29.121578 containerd[1734]: time="2025-05-14T00:03:29.121547053Z" level=info msg="CreateContainer within sandbox \"ce45ef5ef1e0ca32cb5b33c947da1d0494a1383b0bdf798140b5679347ca939d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5a4b999b766d9eb047c7c7c568361737aef1c03e52481940cc769d4e613f09e5\"" May 14 00:03:29.124536 containerd[1734]: time="2025-05-14T00:03:29.122412366Z" level=info msg="StartContainer for \"5a4b999b766d9eb047c7c7c568361737aef1c03e52481940cc769d4e613f09e5\"" May 14 00:03:29.124536 containerd[1734]: time="2025-05-14T00:03:29.123942189Z" level=info msg="connecting to shim 5a4b999b766d9eb047c7c7c568361737aef1c03e52481940cc769d4e613f09e5" address="unix:///run/containerd/s/8127e32e4916815e0df0a3cc3aa5093b570eba1997a195110f78c7b218fc1d0e" protocol=ttrpc version=3 May 14 00:03:29.150665 systemd[1]: Started cri-containerd-5a4b999b766d9eb047c7c7c568361737aef1c03e52481940cc769d4e613f09e5.scope - libcontainer container 5a4b999b766d9eb047c7c7c568361737aef1c03e52481940cc769d4e613f09e5. May 14 00:03:29.189030 systemd[1]: cri-containerd-5a4b999b766d9eb047c7c7c568361737aef1c03e52481940cc769d4e613f09e5.scope: Deactivated successfully. 
May 14 00:03:29.191013 containerd[1734]: time="2025-05-14T00:03:29.190958390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a4b999b766d9eb047c7c7c568361737aef1c03e52481940cc769d4e613f09e5\" id:\"5a4b999b766d9eb047c7c7c568361737aef1c03e52481940cc769d4e613f09e5\" pid:5418 exited_at:{seconds:1747181009 nanos:189759072}" May 14 00:03:29.191299 containerd[1734]: time="2025-05-14T00:03:29.191109192Z" level=info msg="received exit event container_id:\"5a4b999b766d9eb047c7c7c568361737aef1c03e52481940cc769d4e613f09e5\" id:\"5a4b999b766d9eb047c7c7c568361737aef1c03e52481940cc769d4e613f09e5\" pid:5418 exited_at:{seconds:1747181009 nanos:189759072}" May 14 00:03:29.194350 containerd[1734]: time="2025-05-14T00:03:29.193645430Z" level=info msg="StartContainer for \"5a4b999b766d9eb047c7c7c568361737aef1c03e52481940cc769d4e613f09e5\" returns successfully" May 14 00:03:29.216398 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a4b999b766d9eb047c7c7c568361737aef1c03e52481940cc769d4e613f09e5-rootfs.mount: Deactivated successfully. May 14 00:03:30.083974 containerd[1734]: time="2025-05-14T00:03:30.083813533Z" level=info msg="CreateContainer within sandbox \"ce45ef5ef1e0ca32cb5b33c947da1d0494a1383b0bdf798140b5679347ca939d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 00:03:30.107350 containerd[1734]: time="2025-05-14T00:03:30.107304784Z" level=info msg="Container cf18f83ea1c7f04646ec88e445674208a4af3009c6cee6c2c5978f3675a711f3: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:30.113826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount898827246.mount: Deactivated successfully. 
May 14 00:03:30.125301 containerd[1734]: time="2025-05-14T00:03:30.125254353Z" level=info msg="CreateContainer within sandbox \"ce45ef5ef1e0ca32cb5b33c947da1d0494a1383b0bdf798140b5679347ca939d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cf18f83ea1c7f04646ec88e445674208a4af3009c6cee6c2c5978f3675a711f3\"" May 14 00:03:30.126286 containerd[1734]: time="2025-05-14T00:03:30.126249268Z" level=info msg="StartContainer for \"cf18f83ea1c7f04646ec88e445674208a4af3009c6cee6c2c5978f3675a711f3\"" May 14 00:03:30.127563 containerd[1734]: time="2025-05-14T00:03:30.127522387Z" level=info msg="connecting to shim cf18f83ea1c7f04646ec88e445674208a4af3009c6cee6c2c5978f3675a711f3" address="unix:///run/containerd/s/8127e32e4916815e0df0a3cc3aa5093b570eba1997a195110f78c7b218fc1d0e" protocol=ttrpc version=3 May 14 00:03:30.150699 systemd[1]: Started cri-containerd-cf18f83ea1c7f04646ec88e445674208a4af3009c6cee6c2c5978f3675a711f3.scope - libcontainer container cf18f83ea1c7f04646ec88e445674208a4af3009c6cee6c2c5978f3675a711f3. May 14 00:03:30.176020 systemd[1]: cri-containerd-cf18f83ea1c7f04646ec88e445674208a4af3009c6cee6c2c5978f3675a711f3.scope: Deactivated successfully. 
May 14 00:03:30.177850 containerd[1734]: time="2025-05-14T00:03:30.177809038Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf18f83ea1c7f04646ec88e445674208a4af3009c6cee6c2c5978f3675a711f3\" id:\"cf18f83ea1c7f04646ec88e445674208a4af3009c6cee6c2c5978f3675a711f3\" pid:5461 exited_at:{seconds:1747181010 nanos:176319716}" May 14 00:03:30.181806 containerd[1734]: time="2025-05-14T00:03:30.181668796Z" level=info msg="received exit event container_id:\"cf18f83ea1c7f04646ec88e445674208a4af3009c6cee6c2c5978f3675a711f3\" id:\"cf18f83ea1c7f04646ec88e445674208a4af3009c6cee6c2c5978f3675a711f3\" pid:5461 exited_at:{seconds:1747181010 nanos:176319716}" May 14 00:03:30.190679 containerd[1734]: time="2025-05-14T00:03:30.190607129Z" level=info msg="StartContainer for \"cf18f83ea1c7f04646ec88e445674208a4af3009c6cee6c2c5978f3675a711f3\" returns successfully" May 14 00:03:30.204273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf18f83ea1c7f04646ec88e445674208a4af3009c6cee6c2c5978f3675a711f3-rootfs.mount: Deactivated successfully. 
May 14 00:03:31.081348 containerd[1734]: time="2025-05-14T00:03:31.080289910Z" level=info msg="CreateContainer within sandbox \"ce45ef5ef1e0ca32cb5b33c947da1d0494a1383b0bdf798140b5679347ca939d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 00:03:31.107681 containerd[1734]: time="2025-05-14T00:03:31.107631018Z" level=info msg="Container e55cdce118fc14b6c80d718845f5030e56315a45ed6fb5aeb5f5896e7cff1448: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:31.129009 containerd[1734]: time="2025-05-14T00:03:31.128951236Z" level=info msg="CreateContainer within sandbox \"ce45ef5ef1e0ca32cb5b33c947da1d0494a1383b0bdf798140b5679347ca939d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e55cdce118fc14b6c80d718845f5030e56315a45ed6fb5aeb5f5896e7cff1448\"" May 14 00:03:31.129786 containerd[1734]: time="2025-05-14T00:03:31.129735648Z" level=info msg="StartContainer for \"e55cdce118fc14b6c80d718845f5030e56315a45ed6fb5aeb5f5896e7cff1448\"" May 14 00:03:31.131783 containerd[1734]: time="2025-05-14T00:03:31.131743377Z" level=info msg="connecting to shim e55cdce118fc14b6c80d718845f5030e56315a45ed6fb5aeb5f5896e7cff1448" address="unix:///run/containerd/s/8127e32e4916815e0df0a3cc3aa5093b570eba1997a195110f78c7b218fc1d0e" protocol=ttrpc version=3 May 14 00:03:31.155757 systemd[1]: Started cri-containerd-e55cdce118fc14b6c80d718845f5030e56315a45ed6fb5aeb5f5896e7cff1448.scope - libcontainer container e55cdce118fc14b6c80d718845f5030e56315a45ed6fb5aeb5f5896e7cff1448. 
May 14 00:03:31.197533 containerd[1734]: time="2025-05-14T00:03:31.196934350Z" level=info msg="StartContainer for \"e55cdce118fc14b6c80d718845f5030e56315a45ed6fb5aeb5f5896e7cff1448\" returns successfully" May 14 00:03:31.282188 containerd[1734]: time="2025-05-14T00:03:31.282135522Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e55cdce118fc14b6c80d718845f5030e56315a45ed6fb5aeb5f5896e7cff1448\" id:\"9ca950d016d15db1eaea8faadc96fc8653fb47ec1480811d0550d4d8ea3ffadb\" pid:5530 exited_at:{seconds:1747181011 nanos:281735016}" May 14 00:03:31.753549 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 14 00:03:32.100947 kubelet[3361]: I0514 00:03:32.100780 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9q828" podStartSLOduration=5.100756239 podStartE2EDuration="5.100756239s" podCreationTimestamp="2025-05-14 00:03:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:03:32.100252931 +0000 UTC m=+219.699680812" watchObservedRunningTime="2025-05-14 00:03:32.100756239 +0000 UTC m=+219.700184120" May 14 00:03:33.682483 containerd[1734]: time="2025-05-14T00:03:33.682434243Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e55cdce118fc14b6c80d718845f5030e56315a45ed6fb5aeb5f5896e7cff1448\" id:\"14865fbca54ed2ea62ba0f9c2882c4fdcdf80bfcdbbf12c58ff04d1f10889576\" pid:5734 exit_status:1 exited_at:{seconds:1747181013 nanos:681969936}" May 14 00:03:34.566767 systemd-networkd[1338]: lxc_health: Link UP May 14 00:03:34.584924 systemd-networkd[1338]: lxc_health: Gained carrier May 14 00:03:35.689712 systemd-networkd[1338]: lxc_health: Gained IPv6LL May 14 00:03:35.900547 containerd[1734]: time="2025-05-14T00:03:35.900178740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e55cdce118fc14b6c80d718845f5030e56315a45ed6fb5aeb5f5896e7cff1448\" 
id:\"5385b44cab370e21244a435b40bb11a6f6ddbb7af3b0ebca82c18de8e45b3f72\" pid:6084 exited_at:{seconds:1747181015 nanos:898458414}" May 14 00:03:38.056370 containerd[1734]: time="2025-05-14T00:03:38.056318317Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e55cdce118fc14b6c80d718845f5030e56315a45ed6fb5aeb5f5896e7cff1448\" id:\"60d4bfa18c4ffb60a7b656853a3d7448bc55088414f997d86d23e0a9d11681a9\" pid:6111 exited_at:{seconds:1747181018 nanos:55339303}" May 14 00:03:40.162534 containerd[1734]: time="2025-05-14T00:03:40.162473361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e55cdce118fc14b6c80d718845f5030e56315a45ed6fb5aeb5f5896e7cff1448\" id:\"30d2a2bd6912a926a0b62aa34ec811040f9a2a05852e5cc17f2c266556f61693\" pid:6154 exited_at:{seconds:1747181020 nanos:161848751}" May 14 00:03:42.266563 containerd[1734]: time="2025-05-14T00:03:42.266414973Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e55cdce118fc14b6c80d718845f5030e56315a45ed6fb5aeb5f5896e7cff1448\" id:\"65ef090ba6d9f59de3e5ca9264f358f836d8b446436d9a1c2f1d0c8044637d31\" pid:6176 exited_at:{seconds:1747181022 nanos:265622061}" May 14 00:03:42.372758 sshd[5402]: Connection closed by 10.200.16.10 port 33996 May 14 00:03:42.373721 sshd-session[5400]: pam_unix(sshd:session): session closed for user core May 14 00:03:42.377712 systemd[1]: sshd@27-10.200.8.4:22-10.200.16.10:33996.service: Deactivated successfully. May 14 00:03:42.380077 systemd[1]: session-30.scope: Deactivated successfully. May 14 00:03:42.381581 systemd-logind[1718]: Session 30 logged out. Waiting for processes to exit. May 14 00:03:42.382764 systemd-logind[1718]: Removed session 30.