Mar 25 01:36:45.105475 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 24 23:38:35 -00 2025 Mar 25 01:36:45.105511 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9 Mar 25 01:36:45.105529 kernel: BIOS-provided physical RAM map: Mar 25 01:36:45.105540 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 25 01:36:45.105551 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Mar 25 01:36:45.105561 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Mar 25 01:36:45.105575 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Mar 25 01:36:45.105586 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Mar 25 01:36:45.105600 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Mar 25 01:36:45.105611 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Mar 25 01:36:45.105623 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Mar 25 01:36:45.105634 kernel: printk: bootconsole [earlyser0] enabled Mar 25 01:36:45.105645 kernel: NX (Execute Disable) protection: active Mar 25 01:36:45.105657 kernel: APIC: Static calls initialized Mar 25 01:36:45.105674 kernel: efi: EFI v2.7 by Microsoft Mar 25 01:36:45.105688 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018 Mar 25 01:36:45.107353 kernel: random: crng init done Mar 25 01:36:45.107368 kernel: 
secureboot: Secure boot disabled Mar 25 01:36:45.107381 kernel: SMBIOS 3.1.0 present. Mar 25 01:36:45.107395 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Mar 25 01:36:45.107408 kernel: Hypervisor detected: Microsoft Hyper-V Mar 25 01:36:45.107420 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Mar 25 01:36:45.107433 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0 Mar 25 01:36:45.107445 kernel: Hyper-V: Nested features: 0x1e0101 Mar 25 01:36:45.107458 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Mar 25 01:36:45.107476 kernel: Hyper-V: Using hypercall for remote TLB flush Mar 25 01:36:45.107489 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Mar 25 01:36:45.107503 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Mar 25 01:36:45.107516 kernel: tsc: Marking TSC unstable due to running on Hyper-V Mar 25 01:36:45.107530 kernel: tsc: Detected 2593.904 MHz processor Mar 25 01:36:45.107543 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 25 01:36:45.107556 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 25 01:36:45.107569 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Mar 25 01:36:45.107583 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 25 01:36:45.107599 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 25 01:36:45.107612 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Mar 25 01:36:45.107625 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Mar 25 01:36:45.107638 kernel: Using GB pages for direct mapping Mar 25 01:36:45.107651 kernel: ACPI: Early table checksum verification disabled Mar 25 01:36:45.107664 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Mar 25 
01:36:45.107684 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 25 01:36:45.107822 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 25 01:36:45.107832 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Mar 25 01:36:45.107843 kernel: ACPI: FACS 0x000000003FFFE000 000040 Mar 25 01:36:45.107852 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 25 01:36:45.107862 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 25 01:36:45.107872 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 25 01:36:45.107881 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 25 01:36:45.107893 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 25 01:36:45.107900 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 25 01:36:45.107910 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 25 01:36:45.107918 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Mar 25 01:36:45.107926 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Mar 25 01:36:45.107936 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Mar 25 01:36:45.107944 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Mar 25 01:36:45.107955 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Mar 25 01:36:45.107968 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Mar 25 01:36:45.107976 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Mar 25 01:36:45.107992 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Mar 25 01:36:45.108001 kernel: ACPI: 
Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Mar 25 01:36:45.108010 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Mar 25 01:36:45.108018 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 25 01:36:45.108025 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Mar 25 01:36:45.108033 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Mar 25 01:36:45.108044 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Mar 25 01:36:45.108054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Mar 25 01:36:45.108062 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Mar 25 01:36:45.108072 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Mar 25 01:36:45.108081 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Mar 25 01:36:45.108088 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Mar 25 01:36:45.108099 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Mar 25 01:36:45.108107 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Mar 25 01:36:45.108114 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Mar 25 01:36:45.108125 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Mar 25 01:36:45.108135 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Mar 25 01:36:45.108145 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Mar 25 01:36:45.108154 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Mar 25 01:36:45.108162 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Mar 25 01:36:45.108172 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Mar 25 01:36:45.108181 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] 
Mar 25 01:36:45.108189 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Mar 25 01:36:45.108199 kernel: Zone ranges: Mar 25 01:36:45.108207 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 25 01:36:45.108220 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Mar 25 01:36:45.108228 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Mar 25 01:36:45.108236 kernel: Movable zone start for each node Mar 25 01:36:45.108247 kernel: Early memory node ranges Mar 25 01:36:45.108255 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 25 01:36:45.108263 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Mar 25 01:36:45.108274 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Mar 25 01:36:45.108281 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Mar 25 01:36:45.108289 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Mar 25 01:36:45.108302 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 25 01:36:45.108310 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 25 01:36:45.108320 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Mar 25 01:36:45.108328 kernel: ACPI: PM-Timer IO Port: 0x408 Mar 25 01:36:45.108339 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Mar 25 01:36:45.108347 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Mar 25 01:36:45.108357 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 25 01:36:45.108367 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 25 01:36:45.108376 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Mar 25 01:36:45.108388 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 25 01:36:45.108399 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Mar 25 01:36:45.108406 kernel: Booting paravirtualized kernel on Hyper-V Mar 25 01:36:45.108415 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, 
max_idle_ns: 1910969940391419 ns Mar 25 01:36:45.108426 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 25 01:36:45.108433 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Mar 25 01:36:45.108443 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Mar 25 01:36:45.108452 kernel: pcpu-alloc: [0] 0 1 Mar 25 01:36:45.108459 kernel: Hyper-V: PV spinlocks enabled Mar 25 01:36:45.108472 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 25 01:36:45.108481 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9 Mar 25 01:36:45.108491 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 25 01:36:45.108500 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Mar 25 01:36:45.108507 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 25 01:36:45.108519 kernel: Fallback order for Node 0: 0 Mar 25 01:36:45.108527 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Mar 25 01:36:45.108535 kernel: Policy zone: Normal Mar 25 01:36:45.108554 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 25 01:36:45.108564 kernel: software IO TLB: area num 2. 
Mar 25 01:36:45.108576 kernel: Memory: 8065524K/8387460K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43592K init, 1472K bss, 321680K reserved, 0K cma-reserved) Mar 25 01:36:45.108585 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 25 01:36:45.108596 kernel: ftrace: allocating 37985 entries in 149 pages Mar 25 01:36:45.108604 kernel: ftrace: allocated 149 pages with 4 groups Mar 25 01:36:45.108614 kernel: Dynamic Preempt: voluntary Mar 25 01:36:45.108623 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 25 01:36:45.108632 kernel: rcu: RCU event tracing is enabled. Mar 25 01:36:45.108641 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 25 01:36:45.108654 kernel: Trampoline variant of Tasks RCU enabled. Mar 25 01:36:45.108662 kernel: Rude variant of Tasks RCU enabled. Mar 25 01:36:45.108671 kernel: Tracing variant of Tasks RCU enabled. Mar 25 01:36:45.108679 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 25 01:36:45.108687 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 25 01:36:45.108706 kernel: Using NULL legacy PIC Mar 25 01:36:45.108717 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Mar 25 01:36:45.108725 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 25 01:36:45.108733 kernel: Console: colour dummy device 80x25 Mar 25 01:36:45.108741 kernel: printk: console [tty1] enabled Mar 25 01:36:45.108749 kernel: printk: console [ttyS0] enabled Mar 25 01:36:45.108757 kernel: printk: bootconsole [earlyser0] disabled Mar 25 01:36:45.108765 kernel: ACPI: Core revision 20230628 Mar 25 01:36:45.108773 kernel: Failed to register legacy timer interrupt Mar 25 01:36:45.108781 kernel: APIC: Switch to symmetric I/O mode setup Mar 25 01:36:45.108789 kernel: Hyper-V: enabling crash_kexec_post_notifiers Mar 25 01:36:45.108802 kernel: Hyper-V: Using IPI hypercalls Mar 25 01:36:45.108810 kernel: APIC: send_IPI() replaced with hv_send_ipi() Mar 25 01:36:45.108819 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Mar 25 01:36:45.108830 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Mar 25 01:36:45.108838 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Mar 25 01:36:45.108848 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Mar 25 01:36:45.108858 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Mar 25 01:36:45.108866 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904) Mar 25 01:36:45.108879 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Mar 25 01:36:45.108887 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Mar 25 01:36:45.108897 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 25 01:36:45.108907 kernel: Spectre V2 : Mitigation: Retpolines Mar 25 01:36:45.108917 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 25 01:36:45.108926 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 25 01:36:45.108938 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Mar 25 01:36:45.108947 kernel: RETBleed: Vulnerable Mar 25 01:36:45.108957 kernel: Speculative Store Bypass: Vulnerable Mar 25 01:36:45.108967 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Mar 25 01:36:45.108977 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 25 01:36:45.108989 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 25 01:36:45.108997 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 25 01:36:45.109008 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 25 01:36:45.109016 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Mar 25 01:36:45.109026 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Mar 25 01:36:45.109036 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Mar 25 01:36:45.109043 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 25 01:36:45.109055 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Mar 25 01:36:45.109064 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Mar 25 01:36:45.109072 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Mar 25 01:36:45.109083 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Mar 25 01:36:45.109093 kernel: Freeing SMP alternatives memory: 32K Mar 25 01:36:45.109102 kernel: pid_max: default: 32768 minimum: 301 Mar 25 01:36:45.109114 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 25 01:36:45.109122 kernel: landlock: Up and running. Mar 25 01:36:45.109134 kernel: SELinux: Initializing. 
Mar 25 01:36:45.109142 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 25 01:36:45.109151 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 25 01:36:45.109161 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Mar 25 01:36:45.109170 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 25 01:36:45.109180 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 25 01:36:45.109192 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 25 01:36:45.109200 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Mar 25 01:36:45.109211 kernel: signal: max sigframe size: 3632 Mar 25 01:36:45.109219 kernel: rcu: Hierarchical SRCU implementation. Mar 25 01:36:45.109228 kernel: rcu: Max phase no-delay instances is 400. Mar 25 01:36:45.109239 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 25 01:36:45.109247 kernel: smp: Bringing up secondary CPUs ... Mar 25 01:36:45.109255 kernel: smpboot: x86: Booting SMP configuration: Mar 25 01:36:45.109266 kernel: .... node #0, CPUs: #1 Mar 25 01:36:45.109274 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Mar 25 01:36:45.109289 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Mar 25 01:36:45.109297 kernel: smp: Brought up 1 node, 2 CPUs Mar 25 01:36:45.109305 kernel: smpboot: Max logical packages: 1 Mar 25 01:36:45.109317 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS) Mar 25 01:36:45.109325 kernel: devtmpfs: initialized Mar 25 01:36:45.109335 kernel: x86/mm: Memory block size: 128MB Mar 25 01:36:45.109345 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Mar 25 01:36:45.109353 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 25 01:36:45.109368 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 25 01:36:45.109377 kernel: pinctrl core: initialized pinctrl subsystem Mar 25 01:36:45.109387 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 25 01:36:45.109398 kernel: audit: initializing netlink subsys (disabled) Mar 25 01:36:45.109407 kernel: audit: type=2000 audit(1742866603.027:1): state=initialized audit_enabled=0 res=1 Mar 25 01:36:45.109418 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 25 01:36:45.109428 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 25 01:36:45.109437 kernel: cpuidle: using governor menu Mar 25 01:36:45.109448 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 25 01:36:45.109459 kernel: dca service started, version 1.12.1 Mar 25 01:36:45.109470 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Mar 25 01:36:45.109478 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 25 01:36:45.109486 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 25 01:36:45.109495 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 25 01:36:45.109503 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 25 01:36:45.109514 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 25 01:36:45.109522 kernel: ACPI: Added _OSI(Module Device) Mar 25 01:36:45.109530 kernel: ACPI: Added _OSI(Processor Device) Mar 25 01:36:45.109543 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 25 01:36:45.109551 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 25 01:36:45.109559 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 25 01:36:45.109570 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 25 01:36:45.109579 kernel: ACPI: Interpreter enabled Mar 25 01:36:45.109589 kernel: ACPI: PM: (supports S0 S5) Mar 25 01:36:45.109598 kernel: ACPI: Using IOAPIC for interrupt routing Mar 25 01:36:45.109606 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 25 01:36:45.109617 kernel: PCI: Ignoring E820 reservations for host bridge windows Mar 25 01:36:45.109629 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Mar 25 01:36:45.109638 kernel: iommu: Default domain type: Translated Mar 25 01:36:45.109649 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 25 01:36:45.109656 kernel: efivars: Registered efivars operations Mar 25 01:36:45.109666 kernel: PCI: Using ACPI for IRQ routing Mar 25 01:36:45.109676 kernel: PCI: System does not support PCI Mar 25 01:36:45.109684 kernel: vgaarb: loaded Mar 25 01:36:45.109700 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Mar 25 01:36:45.109708 kernel: VFS: Disk quotas dquot_6.6.0 Mar 25 01:36:45.109722 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 25 01:36:45.109730 kernel: pnp: PnP ACPI init Mar 25 01:36:45.109739 
kernel: pnp: PnP ACPI: found 3 devices Mar 25 01:36:45.109750 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 25 01:36:45.109758 kernel: NET: Registered PF_INET protocol family Mar 25 01:36:45.109769 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 25 01:36:45.109779 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Mar 25 01:36:45.109787 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 25 01:36:45.109798 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 25 01:36:45.109809 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Mar 25 01:36:45.109819 kernel: TCP: Hash tables configured (established 65536 bind 65536) Mar 25 01:36:45.109829 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Mar 25 01:36:45.109837 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Mar 25 01:36:45.109848 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 25 01:36:45.109857 kernel: NET: Registered PF_XDP protocol family Mar 25 01:36:45.109868 kernel: PCI: CLS 0 bytes, default 64 Mar 25 01:36:45.109877 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 25 01:36:45.109888 kernel: software IO TLB: mapped [mem 0x000000003ae75000-0x000000003ee75000] (64MB) Mar 25 01:36:45.109901 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 25 01:36:45.109910 kernel: Initialise system trusted keyrings Mar 25 01:36:45.109919 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Mar 25 01:36:45.109931 kernel: Key type asymmetric registered Mar 25 01:36:45.109939 kernel: Asymmetric key parser 'x509' registered Mar 25 01:36:45.109947 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 25 01:36:45.109959 kernel: io scheduler mq-deadline 
registered Mar 25 01:36:45.109967 kernel: io scheduler kyber registered Mar 25 01:36:45.109975 kernel: io scheduler bfq registered Mar 25 01:36:45.109988 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 25 01:36:45.109997 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 25 01:36:45.110007 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 25 01:36:45.110016 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Mar 25 01:36:45.110024 kernel: i8042: PNP: No PS/2 controller found. Mar 25 01:36:45.110171 kernel: rtc_cmos 00:02: registered as rtc0 Mar 25 01:36:45.110273 kernel: rtc_cmos 00:02: setting system clock to 2025-03-25T01:36:44 UTC (1742866604) Mar 25 01:36:45.110371 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Mar 25 01:36:45.110383 kernel: intel_pstate: CPU model not supported Mar 25 01:36:45.110393 kernel: efifb: probing for efifb Mar 25 01:36:45.110405 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 25 01:36:45.110413 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 25 01:36:45.110422 kernel: efifb: scrolling: redraw Mar 25 01:36:45.110432 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 25 01:36:45.110440 kernel: Console: switching to colour frame buffer device 128x48 Mar 25 01:36:45.110451 kernel: fb0: EFI VGA frame buffer device Mar 25 01:36:45.110463 kernel: pstore: Using crash dump compression: deflate Mar 25 01:36:45.110472 kernel: pstore: Registered efi_pstore as persistent store backend Mar 25 01:36:45.110482 kernel: NET: Registered PF_INET6 protocol family Mar 25 01:36:45.110490 kernel: Segment Routing with IPv6 Mar 25 01:36:45.110501 kernel: In-situ OAM (IOAM) with IPv6 Mar 25 01:36:45.110510 kernel: NET: Registered PF_PACKET protocol family Mar 25 01:36:45.110518 kernel: Key type dns_resolver registered Mar 25 01:36:45.110529 kernel: IPI shorthand broadcast: enabled Mar 25 01:36:45.110537 kernel: 
sched_clock: Marking stable (834002600, 48442400)->(1107991800, -225546800) Mar 25 01:36:45.110551 kernel: registered taskstats version 1 Mar 25 01:36:45.110560 kernel: Loading compiled-in X.509 certificates Mar 25 01:36:45.110569 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: eff01054e94a599f8e404b9a9482f4e2220f5386' Mar 25 01:36:45.110579 kernel: Key type .fscrypt registered Mar 25 01:36:45.110587 kernel: Key type fscrypt-provisioning registered Mar 25 01:36:45.110598 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 25 01:36:45.110607 kernel: ima: Allocated hash algorithm: sha1 Mar 25 01:36:45.110619 kernel: ima: No architecture policies found Mar 25 01:36:45.110629 kernel: clk: Disabling unused clocks Mar 25 01:36:45.110643 kernel: Freeing unused kernel image (initmem) memory: 43592K Mar 25 01:36:45.110653 kernel: Write protecting the kernel read-only data: 40960k Mar 25 01:36:45.110666 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K Mar 25 01:36:45.110676 kernel: Run /init as init process Mar 25 01:36:45.110708 kernel: with arguments: Mar 25 01:36:45.110719 kernel: /init Mar 25 01:36:45.110729 kernel: with environment: Mar 25 01:36:45.110739 kernel: HOME=/ Mar 25 01:36:45.110748 kernel: TERM=linux Mar 25 01:36:45.110758 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 25 01:36:45.110775 systemd[1]: Successfully made /usr/ read-only. Mar 25 01:36:45.110786 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 25 01:36:45.110799 systemd[1]: Detected virtualization microsoft. Mar 25 01:36:45.110808 systemd[1]: Detected architecture x86-64. Mar 25 01:36:45.110819 systemd[1]: Running in initrd. 
Mar 25 01:36:45.110831 systemd[1]: No hostname configured, using default hostname. Mar 25 01:36:45.110845 systemd[1]: Hostname set to . Mar 25 01:36:45.110858 systemd[1]: Initializing machine ID from random generator. Mar 25 01:36:45.110869 systemd[1]: Queued start job for default target initrd.target. Mar 25 01:36:45.110881 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 25 01:36:45.110894 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 25 01:36:45.110907 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 25 01:36:45.110921 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 25 01:36:45.110935 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 25 01:36:45.110953 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 25 01:36:45.110969 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 25 01:36:45.110984 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 25 01:36:45.110999 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 25 01:36:45.111013 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 25 01:36:45.111029 systemd[1]: Reached target paths.target - Path Units. Mar 25 01:36:45.111045 systemd[1]: Reached target slices.target - Slice Units. Mar 25 01:36:45.111060 systemd[1]: Reached target swap.target - Swaps. Mar 25 01:36:45.111079 systemd[1]: Reached target timers.target - Timer Units. Mar 25 01:36:45.111095 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Mar 25 01:36:45.111110 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 25 01:36:45.111126 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 25 01:36:45.111142 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 25 01:36:45.111158 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 25 01:36:45.111174 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 25 01:36:45.111190 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 25 01:36:45.111208 systemd[1]: Reached target sockets.target - Socket Units. Mar 25 01:36:45.111224 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 25 01:36:45.111240 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 25 01:36:45.111256 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 25 01:36:45.111271 systemd[1]: Starting systemd-fsck-usr.service... Mar 25 01:36:45.111287 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 25 01:36:45.111303 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 25 01:36:45.111342 systemd-journald[177]: Collecting audit messages is disabled. Mar 25 01:36:45.111381 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:36:45.111398 systemd-journald[177]: Journal started Mar 25 01:36:45.111431 systemd-journald[177]: Runtime Journal (/run/log/journal/8572b7d7214540d9b43b035f0f2bf7df) is 8M, max 158.7M, 150.7M free. Mar 25 01:36:45.124729 systemd[1]: Started systemd-journald.service - Journal Service. Mar 25 01:36:45.122237 systemd-modules-load[179]: Inserted module 'overlay' Mar 25 01:36:45.124730 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 25 01:36:45.136682 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Mar 25 01:36:45.145249 systemd[1]: Finished systemd-fsck-usr.service. Mar 25 01:36:45.150905 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:36:45.163984 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 25 01:36:45.177181 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 25 01:36:45.177207 kernel: Bridge firewalling registered Mar 25 01:36:45.178502 systemd-modules-load[179]: Inserted module 'br_netfilter' Mar 25 01:36:45.182856 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 25 01:36:45.195084 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 25 01:36:45.198713 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 25 01:36:45.208212 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 25 01:36:45.211932 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 01:36:45.214586 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 25 01:36:45.223504 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 25 01:36:45.241094 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 25 01:36:45.249991 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 25 01:36:45.256513 dracut-cmdline[202]: dracut-dracut-053
Mar 25 01:36:45.259063 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9
Mar 25 01:36:45.280296 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:36:45.290833 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 25 01:36:45.296263 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:36:45.343183 systemd-resolved[247]: Positive Trust Anchors:
Mar 25 01:36:45.343199 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 25 01:36:45.343261 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 25 01:36:45.346560 systemd-resolved[247]: Defaulting to hostname 'linux'.
Mar 25 01:36:45.348055 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 25 01:36:45.378131 kernel: SCSI subsystem initialized
Mar 25 01:36:45.351964 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:36:45.388710 kernel: Loading iSCSI transport class v2.0-870.
Mar 25 01:36:45.400712 kernel: iscsi: registered transport (tcp)
Mar 25 01:36:45.422443 kernel: iscsi: registered transport (qla4xxx)
Mar 25 01:36:45.422508 kernel: QLogic iSCSI HBA Driver
Mar 25 01:36:45.457819 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 25 01:36:45.465151 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 25 01:36:45.502995 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 25 01:36:45.503069 kernel: device-mapper: uevent: version 1.0.3
Mar 25 01:36:45.507713 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 25 01:36:45.546716 kernel: raid6: avx512x4 gen() 18562 MB/s
Mar 25 01:36:45.566709 kernel: raid6: avx512x2 gen() 18474 MB/s
Mar 25 01:36:45.585706 kernel: raid6: avx512x1 gen() 18358 MB/s
Mar 25 01:36:45.604702 kernel: raid6: avx2x4 gen() 18238 MB/s
Mar 25 01:36:45.623708 kernel: raid6: avx2x2 gen() 18351 MB/s
Mar 25 01:36:45.643980 kernel: raid6: avx2x1 gen() 13653 MB/s
Mar 25 01:36:45.644034 kernel: raid6: using algorithm avx512x4 gen() 18562 MB/s
Mar 25 01:36:45.665033 kernel: raid6: .... xor() 7776 MB/s, rmw enabled
Mar 25 01:36:45.665063 kernel: raid6: using avx512x2 recovery algorithm
Mar 25 01:36:45.688716 kernel: xor: automatically using best checksumming function avx
Mar 25 01:36:45.829719 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 25 01:36:45.838889 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 01:36:45.845396 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:36:45.866053 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Mar 25 01:36:45.871240 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:36:45.883625 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 25 01:36:45.899532 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Mar 25 01:36:45.927896 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 01:36:45.931808 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 25 01:36:45.981541 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:36:45.992853 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 25 01:36:46.023625 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 25 01:36:46.035013 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 01:36:46.038788 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:36:46.042175 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 01:36:46.056201 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 25 01:36:46.084709 kernel: cryptd: max_cpu_qlen set to 1000
Mar 25 01:36:46.087743 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 01:36:46.096027 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 01:36:46.096161 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:36:46.102440 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:36:46.112542 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:36:46.112760 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:36:46.121745 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:36:46.127915 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 25 01:36:46.127938 kernel: AES CTR mode by8 optimization enabled
Mar 25 01:36:46.131990 kernel: hv_vmbus: Vmbus version:5.2
Mar 25 01:36:46.138566 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:36:46.148247 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:36:46.157787 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 25 01:36:46.164357 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 25 01:36:46.164407 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 25 01:36:46.176727 kernel: PTP clock support registered
Mar 25 01:36:46.180022 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:36:46.180783 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:36:46.196764 kernel: hv_utils: Registering HyperV Utility Driver
Mar 25 01:36:46.196804 kernel: hv_vmbus: registering driver hv_utils
Mar 25 01:36:46.194866 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:36:46.203045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:36:46.206080 kernel: hv_utils: Heartbeat IC version 3.0
Mar 25 01:36:46.209198 kernel: hv_utils: Shutdown IC version 3.2
Mar 25 01:36:46.733996 kernel: hv_utils: TimeSync IC version 4.0
Mar 25 01:36:46.734034 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 25 01:36:46.728103 systemd-resolved[247]: Clock change detected. Flushing caches.
Mar 25 01:36:46.742296 kernel: hv_vmbus: registering driver hv_netvsc
Mar 25 01:36:46.748306 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 25 01:36:46.756310 kernel: hv_vmbus: registering driver hv_storvsc
Mar 25 01:36:46.767867 kernel: scsi host0: storvsc_host_t
Mar 25 01:36:46.768095 kernel: scsi host1: storvsc_host_t
Mar 25 01:36:46.768132 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 25 01:36:46.770893 kernel: hv_vmbus: registering driver hid_hyperv
Mar 25 01:36:46.772335 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:36:46.785172 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Mar 25 01:36:46.785271 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 25 01:36:46.786567 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:36:46.797331 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 25 01:36:46.820012 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 25 01:36:46.822110 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 25 01:36:46.822141 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 25 01:36:46.822587 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:36:46.843568 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 25 01:36:46.858073 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 25 01:36:46.858304 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 25 01:36:46.858509 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 25 01:36:46.858688 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 25 01:36:46.858863 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 25 01:36:46.858886 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 25 01:36:46.932449 kernel: hv_netvsc 7ced8d2d-6476-7ced-8d2d-64767ced8d2d eth0: VF slot 1 added
Mar 25 01:36:46.940371 kernel: hv_vmbus: registering driver hv_pci
Mar 25 01:36:46.945296 kernel: hv_pci 82d36ddd-8025-4a7b-8b92-b03a8acecd5c: PCI VMBus probing: Using version 0x10004
Mar 25 01:36:46.991249 kernel: hv_pci 82d36ddd-8025-4a7b-8b92-b03a8acecd5c: PCI host bridge to bus 8025:00
Mar 25 01:36:46.991465 kernel: pci_bus 8025:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Mar 25 01:36:46.991641 kernel: pci_bus 8025:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 25 01:36:46.991795 kernel: pci 8025:00:02.0: [15b3:1016] type 00 class 0x020000
Mar 25 01:36:46.991989 kernel: pci 8025:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Mar 25 01:36:46.992162 kernel: pci 8025:00:02.0: enabling Extended Tags
Mar 25 01:36:46.992362 kernel: pci 8025:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8025:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Mar 25 01:36:46.992547 kernel: pci_bus 8025:00: busn_res: [bus 00-ff] end is updated to 00
Mar 25 01:36:46.992712 kernel: pci 8025:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Mar 25 01:36:47.153364 kernel: mlx5_core 8025:00:02.0: enabling device (0000 -> 0002)
Mar 25 01:36:47.375959 kernel: mlx5_core 8025:00:02.0: firmware version: 14.30.5000
Mar 25 01:36:47.376193 kernel: hv_netvsc 7ced8d2d-6476-7ced-8d2d-64767ced8d2d eth0: VF registering: eth1
Mar 25 01:36:47.376836 kernel: mlx5_core 8025:00:02.0 eth1: joined to eth0
Mar 25 01:36:47.377045 kernel: mlx5_core 8025:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Mar 25 01:36:47.383315 kernel: mlx5_core 8025:00:02.0 enP32805s1: renamed from eth1
Mar 25 01:36:47.454348 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (457)
Mar 25 01:36:47.476244 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 25 01:36:47.490762 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 25 01:36:47.534860 kernel: BTRFS: device fsid 6d9424cd-1432-492b-b006-b311869817e2 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (443)
Mar 25 01:36:47.535337 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 25 01:36:47.564218 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 25 01:36:47.568293 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 25 01:36:47.580926 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 25 01:36:47.604725 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 25 01:36:47.611302 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 25 01:36:48.615387 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 25 01:36:48.617001 disk-uuid[607]: The operation has completed successfully.
Mar 25 01:36:48.704579 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 25 01:36:48.704687 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 25 01:36:48.738131 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 25 01:36:48.754356 sh[693]: Success
Mar 25 01:36:48.798300 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 25 01:36:48.996428 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 25 01:36:49.005367 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 25 01:36:49.015038 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 25 01:36:49.031297 kernel: BTRFS info (device dm-0): first mount of filesystem 6d9424cd-1432-492b-b006-b311869817e2
Mar 25 01:36:49.031338 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:36:49.038609 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 25 01:36:49.041565 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 25 01:36:49.044286 kernel: BTRFS info (device dm-0): using free space tree
Mar 25 01:36:49.330973 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 25 01:36:49.333959 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 25 01:36:49.337423 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 25 01:36:49.351410 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 25 01:36:49.369303 kernel: BTRFS info (device sda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:36:49.369342 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:36:49.374314 kernel: BTRFS info (device sda6): using free space tree
Mar 25 01:36:49.417299 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 25 01:36:49.425322 kernel: BTRFS info (device sda6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:36:49.429328 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 25 01:36:49.439432 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 25 01:36:49.452454 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 25 01:36:49.458243 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 25 01:36:49.490914 systemd-networkd[874]: lo: Link UP
Mar 25 01:36:49.490924 systemd-networkd[874]: lo: Gained carrier
Mar 25 01:36:49.493188 systemd-networkd[874]: Enumeration completed
Mar 25 01:36:49.493422 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 25 01:36:49.496205 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:36:49.496209 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 25 01:36:49.498859 systemd[1]: Reached target network.target - Network.
Mar 25 01:36:49.552298 kernel: mlx5_core 8025:00:02.0 enP32805s1: Link up
Mar 25 01:36:49.583327 kernel: hv_netvsc 7ced8d2d-6476-7ced-8d2d-64767ced8d2d eth0: Data path switched to VF: enP32805s1
Mar 25 01:36:49.583666 systemd-networkd[874]: enP32805s1: Link UP
Mar 25 01:36:49.583790 systemd-networkd[874]: eth0: Link UP
Mar 25 01:36:49.583945 systemd-networkd[874]: eth0: Gained carrier
Mar 25 01:36:49.583958 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:36:49.588497 systemd-networkd[874]: enP32805s1: Gained carrier
Mar 25 01:36:49.612383 systemd-networkd[874]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16
Mar 25 01:36:50.200089 ignition[859]: Ignition 2.20.0
Mar 25 01:36:50.200100 ignition[859]: Stage: fetch-offline
Mar 25 01:36:50.201609 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 25 01:36:50.200138 ignition[859]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:36:50.210423 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 25 01:36:50.200147 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:36:50.200244 ignition[859]: parsed url from cmdline: ""
Mar 25 01:36:50.200249 ignition[859]: no config URL provided
Mar 25 01:36:50.200255 ignition[859]: reading system config file "/usr/lib/ignition/user.ign"
Mar 25 01:36:50.200264 ignition[859]: no config at "/usr/lib/ignition/user.ign"
Mar 25 01:36:50.200270 ignition[859]: failed to fetch config: resource requires networking
Mar 25 01:36:50.200669 ignition[859]: Ignition finished successfully
Mar 25 01:36:50.232428 ignition[884]: Ignition 2.20.0
Mar 25 01:36:50.232436 ignition[884]: Stage: fetch
Mar 25 01:36:50.232706 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:36:50.232717 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:36:50.232831 ignition[884]: parsed url from cmdline: ""
Mar 25 01:36:50.232834 ignition[884]: no config URL provided
Mar 25 01:36:50.232839 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
Mar 25 01:36:50.232848 ignition[884]: no config at "/usr/lib/ignition/user.ign"
Mar 25 01:36:50.232878 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 25 01:36:50.311473 ignition[884]: GET result: OK
Mar 25 01:36:50.311607 ignition[884]: config has been read from IMDS userdata
Mar 25 01:36:50.311647 ignition[884]: parsing config with SHA512: 01287f447bac2ce5ef47896a789635dc429f4fcd26f7e31edfde91ebe449c3bf09dc776feaebcdd261e26ff6de1737f7261b7568e0355cccda7a02c0945477bf
Mar 25 01:36:50.318568 unknown[884]: fetched base config from "system"
Mar 25 01:36:50.318585 unknown[884]: fetched base config from "system"
Mar 25 01:36:50.319024 ignition[884]: fetch: fetch complete
Mar 25 01:36:50.318594 unknown[884]: fetched user config from "azure"
Mar 25 01:36:50.319030 ignition[884]: fetch: fetch passed
Mar 25 01:36:50.319074 ignition[884]: Ignition finished successfully
Mar 25 01:36:50.332498 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 25 01:36:50.338297 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 25 01:36:50.361464 ignition[890]: Ignition 2.20.0
Mar 25 01:36:50.361475 ignition[890]: Stage: kargs
Mar 25 01:36:50.363728 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 25 01:36:50.361675 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:36:50.367409 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 25 01:36:50.361687 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:36:50.362563 ignition[890]: kargs: kargs passed
Mar 25 01:36:50.362607 ignition[890]: Ignition finished successfully
Mar 25 01:36:50.392601 ignition[896]: Ignition 2.20.0
Mar 25 01:36:50.392652 ignition[896]: Stage: disks
Mar 25 01:36:50.392871 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:36:50.392880 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:36:50.397547 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 25 01:36:50.395571 ignition[896]: disks: disks passed
Mar 25 01:36:50.404889 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 25 01:36:50.395615 ignition[896]: Ignition finished successfully
Mar 25 01:36:50.417136 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 25 01:36:50.420677 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 25 01:36:50.426597 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 25 01:36:50.429646 systemd[1]: Reached target basic.target - Basic System.
Mar 25 01:36:50.435399 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 25 01:36:50.512631 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 25 01:36:50.518503 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 25 01:36:50.526417 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 25 01:36:50.622305 kernel: EXT4-fs (sda9): mounted filesystem 4e6dca82-2e50-453c-be25-61f944b72008 r/w with ordered data mode. Quota mode: none.
Mar 25 01:36:50.623101 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 25 01:36:50.625881 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 25 01:36:50.664140 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 25 01:36:50.669427 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 25 01:36:50.678435 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 25 01:36:50.682023 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 25 01:36:50.682063 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 25 01:36:50.688679 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 25 01:36:50.703752 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 25 01:36:50.710241 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (915)
Mar 25 01:36:50.718302 kernel: BTRFS info (device sda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:36:50.718337 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:36:50.722529 kernel: BTRFS info (device sda6): using free space tree
Mar 25 01:36:50.728405 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 25 01:36:50.729433 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 25 01:36:51.332822 coreos-metadata[917]: Mar 25 01:36:51.332 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 25 01:36:51.339368 coreos-metadata[917]: Mar 25 01:36:51.339 INFO Fetch successful
Mar 25 01:36:51.339368 coreos-metadata[917]: Mar 25 01:36:51.339 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 25 01:36:51.351489 coreos-metadata[917]: Mar 25 01:36:51.351 INFO Fetch successful
Mar 25 01:36:51.365771 coreos-metadata[917]: Mar 25 01:36:51.365 INFO wrote hostname ci-4284.0.0-a-b8cd1bf009 to /sysroot/etc/hostname
Mar 25 01:36:51.373945 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 25 01:36:51.395093 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory
Mar 25 01:36:51.430957 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory
Mar 25 01:36:51.448322 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory
Mar 25 01:36:51.453052 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 25 01:36:51.535440 systemd-networkd[874]: eth0: Gained IPv6LL
Mar 25 01:36:51.599480 systemd-networkd[874]: enP32805s1: Gained IPv6LL
Mar 25 01:36:52.199830 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 25 01:36:52.207504 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 25 01:36:52.211478 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 25 01:36:52.234875 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 25 01:36:52.241361 kernel: BTRFS info (device sda6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:36:52.259230 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 25 01:36:52.268191 ignition[1035]: INFO : Ignition 2.20.0
Mar 25 01:36:52.268191 ignition[1035]: INFO : Stage: mount
Mar 25 01:36:52.270522 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:36:52.270522 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:36:52.270522 ignition[1035]: INFO : mount: mount passed
Mar 25 01:36:52.270522 ignition[1035]: INFO : Ignition finished successfully
Mar 25 01:36:52.270000 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 25 01:36:52.288349 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 25 01:36:52.304672 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 25 01:36:52.323303 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (1046)
Mar 25 01:36:52.327291 kernel: BTRFS info (device sda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:36:52.327335 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:36:52.332411 kernel: BTRFS info (device sda6): using free space tree
Mar 25 01:36:52.339307 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 25 01:36:52.339497 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 25 01:36:52.367671 ignition[1063]: INFO : Ignition 2.20.0
Mar 25 01:36:52.367671 ignition[1063]: INFO : Stage: files
Mar 25 01:36:52.372695 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:36:52.372695 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:36:52.372695 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Mar 25 01:36:52.382447 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 25 01:36:52.382447 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 25 01:36:52.474298 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 25 01:36:52.479094 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 25 01:36:52.483012 unknown[1063]: wrote ssh authorized keys file for user: core
Mar 25 01:36:52.485845 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 25 01:36:52.498190 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 25 01:36:52.503536 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 25 01:36:52.566354 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 25 01:36:52.700442 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 25 01:36:52.706980 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 25 01:36:52.706980 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 25 01:36:53.224624 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 25 01:36:53.341961 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 25 01:36:53.347441 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 25 01:36:53.347441 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 25 01:36:53.347441 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 25 01:36:53.347441 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 25 01:36:53.347441 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 25 01:36:53.371756 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 25 01:36:53.371756 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 25 01:36:53.371756 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 25 01:36:53.386175 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 25 01:36:53.391067 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 25 01:36:53.396112 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 25 01:36:53.402973 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 25 01:36:53.410030 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 25 01:36:53.410030 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 25 01:36:53.937133 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 25 01:36:54.843925 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 25 01:36:54.843925 ignition[1063]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 25 01:36:54.859085 ignition[1063]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 25 01:36:54.868642 ignition[1063]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 25 01:36:54.868642 ignition[1063]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 25 01:36:54.868642 ignition[1063]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 25 01:36:54.868642 ignition[1063]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 25 01:36:54.868642 ignition[1063]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 25 01:36:54.868642 ignition[1063]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 25 01:36:54.868642 ignition[1063]: INFO : files: files passed
Mar 25 01:36:54.868642 ignition[1063]: INFO : Ignition finished successfully
Mar 25 01:36:54.860957 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 25 01:36:54.872411 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 25 01:36:54.886397 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 25 01:36:54.907482 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:36:54.907482 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:36:54.910352 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:36:54.925847 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 25 01:36:54.928415 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 25 01:36:54.934741 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 25 01:36:54.939077 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 25 01:36:54.948602 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 25 01:36:54.998883 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 25 01:36:54.998992 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 25 01:36:55.006235 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 25 01:36:55.012200 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 25 01:36:55.015090 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 25 01:36:55.017393 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 25 01:36:55.043082 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 25 01:36:55.050385 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 25 01:36:55.070904 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:36:55.077236 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:36:55.077454 systemd[1]: Stopped target timers.target - Timer Units.
Mar 25 01:36:55.077887 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 25 01:36:55.078002 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 25 01:36:55.078767 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 25 01:36:55.079213 systemd[1]: Stopped target basic.target - Basic System.
Mar 25 01:36:55.080240 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 25 01:36:55.080719 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 25 01:36:55.081174 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 25 01:36:55.081640 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 25 01:36:55.082086 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 01:36:55.082572 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 25 01:36:55.083129 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 25 01:36:55.083711 systemd[1]: Stopped target swap.target - Swaps.
Mar 25 01:36:55.084142 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 25 01:36:55.084253 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 01:36:55.085087 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:36:55.085543 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:36:55.085955 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 25 01:36:55.122664 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:36:55.179236 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 25 01:36:55.179430 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 25 01:36:55.188190 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 25 01:36:55.188413 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 25 01:36:55.195105 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 25 01:36:55.195243 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 25 01:36:55.201233 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 25 01:36:55.201387 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 25 01:36:55.216363 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 25 01:36:55.230917 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 25 01:36:55.236250 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 25 01:36:55.236454 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:36:55.244327 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 25 01:36:55.244438 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 01:36:55.259512 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 25 01:36:55.259629 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 25 01:36:55.272133 ignition[1117]: INFO : Ignition 2.20.0
Mar 25 01:36:55.272133 ignition[1117]: INFO : Stage: umount
Mar 25 01:36:55.272133 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:36:55.272133 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:36:55.272133 ignition[1117]: INFO : umount: umount passed
Mar 25 01:36:55.272133 ignition[1117]: INFO : Ignition finished successfully
Mar 25 01:36:55.263590 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 25 01:36:55.263688 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 25 01:36:55.274032 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 25 01:36:55.274101 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 25 01:36:55.297918 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 25 01:36:55.297991 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 25 01:36:55.303309 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 25 01:36:55.303364 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 25 01:36:55.306321 systemd[1]: Stopped target network.target - Network.
Mar 25 01:36:55.318031 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 25 01:36:55.318104 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 25 01:36:55.321371 systemd[1]: Stopped target paths.target - Path Units.
Mar 25 01:36:55.326713 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 25 01:36:55.332047 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:36:55.342568 systemd[1]: Stopped target slices.target - Slice Units.
Mar 25 01:36:55.345006 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 25 01:36:55.352189 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 25 01:36:55.352255 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 01:36:55.357100 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 25 01:36:55.357138 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 01:36:55.357311 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 25 01:36:55.357362 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 25 01:36:55.357756 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 25 01:36:55.357791 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 25 01:36:55.358425 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 25 01:36:55.358834 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 25 01:36:55.360270 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 25 01:36:55.360866 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 25 01:36:55.360958 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 25 01:36:55.361744 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 25 01:36:55.361876 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 25 01:36:55.384350 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 25 01:36:55.384463 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 25 01:36:55.394043 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 25 01:36:55.394330 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 25 01:36:55.394388 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:36:55.409899 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:36:55.438966 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 25 01:36:55.439102 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 25 01:36:55.445519 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 25 01:36:55.446094 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 25 01:36:55.446169 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:36:55.453365 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 25 01:36:55.456355 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 25 01:36:55.456416 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 25 01:36:55.462074 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 25 01:36:55.462142 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:36:55.468272 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 25 01:36:55.468334 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:36:55.489600 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:36:55.493835 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 25 01:36:55.510682 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 25 01:36:55.510845 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:36:55.517085 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 25 01:36:55.517161 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:36:55.528567 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 25 01:36:55.528615 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:36:55.537009 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 25 01:36:55.537081 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 01:36:55.542513 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 25 01:36:55.542555 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 25 01:36:55.559758 kernel: hv_netvsc 7ced8d2d-6476-7ced-8d2d-64767ced8d2d eth0: Data path switched from VF: enP32805s1
Mar 25 01:36:55.547701 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 01:36:55.547758 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:36:55.558400 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 25 01:36:55.562910 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 25 01:36:55.562965 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:36:55.575985 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 25 01:36:55.578906 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 01:36:55.586410 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 25 01:36:55.586461 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:36:55.596027 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:36:55.601906 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:36:55.611230 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 25 01:36:55.611407 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 25 01:36:55.616876 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 25 01:36:55.616955 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 25 01:36:55.629026 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 25 01:36:55.635470 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 25 01:36:55.649920 systemd[1]: Switching root.
Mar 25 01:36:55.714554 systemd-journald[177]: Journal stopped
Mar 25 01:37:00.339313 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Mar 25 01:37:00.339346 kernel: SELinux: policy capability network_peer_controls=1
Mar 25 01:37:00.339363 kernel: SELinux: policy capability open_perms=1
Mar 25 01:37:00.339372 kernel: SELinux: policy capability extended_socket_class=1
Mar 25 01:37:00.339382 kernel: SELinux: policy capability always_check_network=0
Mar 25 01:37:00.339391 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 25 01:37:00.339400 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 25 01:37:00.339412 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 25 01:37:00.339424 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 25 01:37:00.339436 kernel: audit: type=1403 audit(1742866617.453:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 25 01:37:00.339446 systemd[1]: Successfully loaded SELinux policy in 118.566ms.
Mar 25 01:37:00.339458 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.392ms.
Mar 25 01:37:00.339469 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 01:37:00.339480 systemd[1]: Detected virtualization microsoft.
Mar 25 01:37:00.339494 systemd[1]: Detected architecture x86-64.
Mar 25 01:37:00.339505 systemd[1]: Detected first boot.
Mar 25 01:37:00.339516 systemd[1]: Hostname set to .
Mar 25 01:37:00.339525 systemd[1]: Initializing machine ID from random generator.
Mar 25 01:37:00.339538 zram_generator::config[1161]: No configuration found.
Mar 25 01:37:00.339553 kernel: Guest personality initialized and is inactive
Mar 25 01:37:00.339562 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Mar 25 01:37:00.339572 kernel: Initialized host personality
Mar 25 01:37:00.339582 kernel: NET: Registered PF_VSOCK protocol family
Mar 25 01:37:00.339591 systemd[1]: Populated /etc with preset unit settings.
Mar 25 01:37:00.339604 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 25 01:37:00.339614 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 25 01:37:00.339626 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 25 01:37:00.339636 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 25 01:37:00.339651 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 25 01:37:00.339662 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 25 01:37:00.339675 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 25 01:37:00.339685 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 25 01:37:00.339700 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 25 01:37:00.339711 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 25 01:37:00.339722 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 25 01:37:00.339735 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 25 01:37:00.339747 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:37:00.339758 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:37:00.339768 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 25 01:37:00.339781 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 25 01:37:00.339795 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 25 01:37:00.339807 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 01:37:00.339817 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 25 01:37:00.339832 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:37:00.339842 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 25 01:37:00.339855 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 25 01:37:00.339866 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 25 01:37:00.339878 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 25 01:37:00.339888 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:37:00.339901 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 01:37:00.339915 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 01:37:00.339928 systemd[1]: Reached target swap.target - Swaps.
Mar 25 01:37:00.339938 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 25 01:37:00.339951 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 25 01:37:00.339962 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 25 01:37:00.339974 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:37:00.339988 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:37:00.340000 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:37:00.340011 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 25 01:37:00.340023 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 25 01:37:00.340033 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 25 01:37:00.340046 systemd[1]: Mounting media.mount - External Media Directory...
Mar 25 01:37:00.340056 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:37:00.340072 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 25 01:37:00.340082 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 25 01:37:00.340095 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 25 01:37:00.340106 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 25 01:37:00.340119 systemd[1]: Reached target machines.target - Containers.
Mar 25 01:37:00.340129 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 25 01:37:00.340143 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:37:00.340153 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 01:37:00.340167 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 25 01:37:00.340179 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:37:00.340192 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 25 01:37:00.340203 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:37:00.340216 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 25 01:37:00.340226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:37:00.340239 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 25 01:37:00.340250 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 25 01:37:00.340262 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 25 01:37:00.340283 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 25 01:37:00.340296 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 25 01:37:00.340307 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:37:00.340320 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 01:37:00.340330 kernel: fuse: init (API version 7.39)
Mar 25 01:37:00.340340 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 01:37:00.340350 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 25 01:37:00.340362 kernel: loop: module loaded
Mar 25 01:37:00.340372 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 25 01:37:00.340382 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 25 01:37:00.340392 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 25 01:37:00.340402 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 25 01:37:00.340413 systemd[1]: Stopped verity-setup.service.
Mar 25 01:37:00.340442 systemd-journald[1261]: Collecting audit messages is disabled.
Mar 25 01:37:00.340468 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:37:00.340479 systemd-journald[1261]: Journal started
Mar 25 01:37:00.340505 systemd-journald[1261]: Runtime Journal (/run/log/journal/854dc5cd3e29401caaa78cef35f05bf2) is 8M, max 158.7M, 150.7M free.
Mar 25 01:36:59.774560 systemd[1]: Queued start job for default target multi-user.target.
Mar 25 01:36:59.782148 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 25 01:36:59.782573 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 25 01:37:00.360710 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 25 01:37:00.361333 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 25 01:37:00.364290 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 25 01:37:00.369523 systemd[1]: Mounted media.mount - External Media Directory.
Mar 25 01:37:00.372221 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 25 01:37:00.375661 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 25 01:37:00.378891 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 25 01:37:00.381832 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 25 01:37:00.385526 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:37:00.389449 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 25 01:37:00.389669 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 25 01:37:00.393507 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:37:00.393733 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:37:00.397287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:37:00.397521 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:37:00.401436 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 25 01:37:00.401664 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 25 01:37:00.407049 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:37:00.407347 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:37:00.410985 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:37:00.414811 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 25 01:37:00.419207 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 25 01:37:00.425206 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 25 01:37:00.445652 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 25 01:37:00.454385 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 25 01:37:00.468514 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 25 01:37:00.476161 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 25 01:37:00.476209 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 25 01:37:00.482988 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 25 01:37:00.503294 kernel: ACPI: bus type drm_connector registered
Mar 25 01:37:00.504344 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 25 01:37:00.512409 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 25 01:37:00.515399 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:37:00.524013 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 25 01:37:00.528215 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 25 01:37:00.531423 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 25 01:37:00.533469 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 25 01:37:00.537381 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 25 01:37:00.540509 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:37:00.546195 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 25 01:37:00.554635 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 25 01:37:00.561524 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 25 01:37:00.561881 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 25 01:37:00.565683 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:37:00.569424 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 25 01:37:00.572944 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 25 01:37:00.578622 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 25 01:37:00.582817 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 25 01:37:00.591442 systemd-journald[1261]: Time spent on flushing to /var/log/journal/854dc5cd3e29401caaa78cef35f05bf2 is 29.817ms for 979 entries.
Mar 25 01:37:00.591442 systemd-journald[1261]: System Journal (/var/log/journal/854dc5cd3e29401caaa78cef35f05bf2) is 8M, max 2.6G, 2.6G free.
Mar 25 01:37:00.638128 systemd-journald[1261]: Received client request to flush runtime journal.
Mar 25 01:37:00.638183 kernel: loop0: detected capacity change from 0 to 151640
Mar 25 01:37:00.593063 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 25 01:37:00.600902 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 25 01:37:00.609447 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 25 01:37:00.640294 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 25 01:37:00.644805 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:37:00.650626 udevadm[1314]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 25 01:37:00.679998 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 25 01:37:00.680719 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 25 01:37:00.715926 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
Mar 25 01:37:00.715954 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
Mar 25 01:37:00.721630 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 01:37:00.727163 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 25 01:37:00.991026 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 25 01:37:00.997530 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 25 01:37:01.025822 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Mar 25 01:37:01.025848 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Mar 25 01:37:01.030415 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:37:01.089309 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 25 01:37:01.107515 kernel: loop1: detected capacity change from 0 to 210664
Mar 25 01:37:01.159303 kernel: loop2: detected capacity change from 0 to 109808
Mar 25 01:37:01.509308 kernel: loop3: detected capacity change from 0 to 28424
Mar 25 01:37:01.856314 kernel: loop4: detected capacity change from 0 to 151640
Mar 25 01:37:01.873309 kernel: loop5: detected capacity change from 0 to 210664
Mar 25 01:37:01.885455 kernel: loop6: detected capacity change from 0 to 109808
Mar 25 01:37:01.897316 kernel: loop7: detected capacity change from 0 to 28424
Mar 25 01:37:01.903920 (sd-merge)[1331]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 25 01:37:01.905090 (sd-merge)[1331]: Merged extensions into '/usr'.
Mar 25 01:37:01.972505 systemd[1]: Reload requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 25 01:37:01.972527 systemd[1]: Reloading...
Mar 25 01:37:02.067302 zram_generator::config[1360]: No configuration found.
Mar 25 01:37:02.208788 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:37:02.288305 systemd[1]: Reloading finished in 315 ms.
Mar 25 01:37:02.306101 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 25 01:37:02.310782 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 25 01:37:02.320408 systemd[1]: Starting ensure-sysext.service...
Mar 25 01:37:02.326181 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 25 01:37:02.330880 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:37:02.352808 systemd-tmpfiles[1419]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 25 01:37:02.353193 systemd-tmpfiles[1419]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 25 01:37:02.353953 systemd-tmpfiles[1419]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 25 01:37:02.354178 systemd-tmpfiles[1419]: ACLs are not supported, ignoring.
Mar 25 01:37:02.354226 systemd-tmpfiles[1419]: ACLs are not supported, ignoring.
Mar 25 01:37:02.365638 systemd[1]: Reload requested from client PID 1418 ('systemctl') (unit ensure-sysext.service)...
Mar 25 01:37:02.365852 systemd[1]: Reloading...
Mar 25 01:37:02.366387 systemd-tmpfiles[1419]: Detected autofs mount point /boot during canonicalization of boot.
Mar 25 01:37:02.366402 systemd-tmpfiles[1419]: Skipping /boot
Mar 25 01:37:02.392136 systemd-tmpfiles[1419]: Detected autofs mount point /boot during canonicalization of boot.
Mar 25 01:37:02.392151 systemd-tmpfiles[1419]: Skipping /boot
Mar 25 01:37:02.401892 systemd-udevd[1420]: Using default interface naming scheme 'v255'.
Mar 25 01:37:02.470302 zram_generator::config[1453]: No configuration found.
Mar 25 01:37:02.606836 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:37:02.770303 kernel: mousedev: PS/2 mouse device common for all mice
Mar 25 01:37:02.811556 systemd[1]: Reloading finished in 444 ms.
Mar 25 01:37:02.822108 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:37:02.843673 kernel: hv_vmbus: registering driver hv_balloon Mar 25 01:37:02.843759 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 25 01:37:02.849527 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 25 01:37:02.850313 kernel: hv_vmbus: registering driver hyperv_fb Mar 25 01:37:02.857421 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 25 01:37:02.863301 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 25 01:37:02.921704 kernel: Console: switching to colour dummy device 80x25 Mar 25 01:37:02.924476 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 25 01:37:02.929289 kernel: Console: switching to colour frame buffer device 128x48 Mar 25 01:37:02.927432 systemd[1]: Finished ensure-sysext.service. Mar 25 01:37:02.944121 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. Mar 25 01:37:02.947373 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 01:37:02.950448 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 01:37:02.967698 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 25 01:37:02.971839 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 01:37:02.999132 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 25 01:37:03.018779 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 25 01:37:03.037963 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 25 01:37:03.053958 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 25 01:37:03.059808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 01:37:03.059926 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 01:37:03.066908 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 25 01:37:03.082506 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 25 01:37:03.095558 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 25 01:37:03.103119 systemd[1]: Reached target time-set.target - System Time Set. Mar 25 01:37:03.113456 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 25 01:37:03.124006 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:37:03.127143 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 01:37:03.142885 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 01:37:03.143806 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 01:37:03.149035 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 25 01:37:03.150438 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 25 01:37:03.153962 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 25 01:37:03.154134 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 25 01:37:03.158996 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 01:37:03.160545 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Mar 25 01:37:03.233398 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 25 01:37:03.233607 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 25 01:37:03.244351 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 25 01:37:03.259235 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 25 01:37:03.349341 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1511)
Mar 25 01:37:03.384175 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Mar 25 01:37:03.378923 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 25 01:37:03.406128 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 25 01:37:03.412252 augenrules[1611]: No rules
Mar 25 01:37:03.414420 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 25 01:37:03.416209 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 25 01:37:03.587999 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 25 01:37:03.593573 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 25 01:37:03.599856 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 25 01:37:03.603266 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 25 01:37:03.632905 systemd-resolved[1561]: Positive Trust Anchors:
Mar 25 01:37:03.632924 systemd-resolved[1561]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 25 01:37:03.632980 systemd-resolved[1561]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 25 01:37:03.642371 systemd-resolved[1561]: Using system hostname 'ci-4284.0.0-a-b8cd1bf009'.
Mar 25 01:37:03.644105 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 25 01:37:03.644392 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:37:03.652350 systemd-networkd[1560]: lo: Link UP
Mar 25 01:37:03.652356 systemd-networkd[1560]: lo: Gained carrier
Mar 25 01:37:03.657410 systemd-networkd[1560]: Enumeration completed
Mar 25 01:37:03.657507 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 25 01:37:03.657709 systemd[1]: Reached target network.target - Network.
Mar 25 01:37:03.659637 systemd-networkd[1560]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:37:03.659646 systemd-networkd[1560]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 25 01:37:03.660519 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 25 01:37:03.671069 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 25 01:37:03.685010 lvm[1662]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 25 01:37:03.675062 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 25 01:37:03.679081 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 25 01:37:03.685865 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 25 01:37:03.713155 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 25 01:37:03.713735 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:37:03.720924 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 25 01:37:03.732846 lvm[1671]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 25 01:37:03.737728 kernel: mlx5_core 8025:00:02.0 enP32805s1: Link up
Mar 25 01:37:03.756317 kernel: hv_netvsc 7ced8d2d-6476-7ced-8d2d-64767ced8d2d eth0: Data path switched to VF: enP32805s1
Mar 25 01:37:03.759523 systemd-networkd[1560]: enP32805s1: Link UP
Mar 25 01:37:03.760391 systemd-networkd[1560]: eth0: Link UP
Mar 25 01:37:03.760399 systemd-networkd[1560]: eth0: Gained carrier
Mar 25 01:37:03.760441 systemd-networkd[1560]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:37:03.760858 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 25 01:37:03.764896 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 25 01:37:03.765595 systemd-networkd[1560]: enP32805s1: Gained carrier
Mar 25 01:37:03.788477 systemd-networkd[1560]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16
Mar 25 01:37:03.842613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:37:05.359575 systemd-networkd[1560]: eth0: Gained IPv6LL
Mar 25 01:37:05.362851 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 25 01:37:05.367024 systemd[1]: Reached target network-online.target - Network is Online.
Mar 25 01:37:05.743432 systemd-networkd[1560]: enP32805s1: Gained IPv6LL
Mar 25 01:37:06.468334 ldconfig[1297]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 25 01:37:06.478353 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 25 01:37:06.483217 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 25 01:37:06.503052 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 25 01:37:06.506742 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 25 01:37:06.509731 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 25 01:37:06.513335 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 25 01:37:06.517004 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 25 01:37:06.519970 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 25 01:37:06.523431 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 25 01:37:06.526962 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 25 01:37:06.527004 systemd[1]: Reached target paths.target - Path Units.
Mar 25 01:37:06.529418 systemd[1]: Reached target timers.target - Timer Units.
Mar 25 01:37:06.547568 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 25 01:37:06.552376 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 25 01:37:06.557905 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 25 01:37:06.561743 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 25 01:37:06.565283 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 25 01:37:06.570576 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 25 01:37:06.574420 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 25 01:37:06.578272 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 25 01:37:06.581507 systemd[1]: Reached target sockets.target - Socket Units.
Mar 25 01:37:06.584138 systemd[1]: Reached target basic.target - Basic System.
Mar 25 01:37:06.586771 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 25 01:37:06.586801 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 25 01:37:06.589170 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 25 01:37:06.595376 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 25 01:37:06.600008 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 25 01:37:06.612518 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 25 01:37:06.620291 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 25 01:37:06.630179 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 25 01:37:06.634766 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 25 01:37:06.634822 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Mar 25 01:37:06.637530 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 25 01:37:06.640660 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 25 01:37:06.643850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:37:06.654855 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 25 01:37:06.656913 KVP[1693]: KVP starting; pid is:1693
Mar 25 01:37:06.661533 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 25 01:37:06.666097 jq[1691]: false
Mar 25 01:37:06.671690 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 25 01:37:06.676048 KVP[1693]: KVP LIC Version: 3.1
Mar 25 01:37:06.676375 kernel: hv_utils: KVP IC version 4.0
Mar 25 01:37:06.682515 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 25 01:37:06.693493 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 25 01:37:06.699063 (chronyd)[1684]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Mar 25 01:37:06.711529 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 25 01:37:06.719919 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 25 01:37:06.722588 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 25 01:37:06.723568 systemd[1]: Starting update-engine.service - Update Engine...
Mar 25 01:37:06.729380 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 25 01:37:06.739542 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 25 01:37:06.740376 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 25 01:37:06.741926 chronyd[1716]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Mar 25 01:37:06.744487 extend-filesystems[1692]: Found loop4
Mar 25 01:37:06.746957 extend-filesystems[1692]: Found loop5
Mar 25 01:37:06.746957 extend-filesystems[1692]: Found loop6
Mar 25 01:37:06.746957 extend-filesystems[1692]: Found loop7
Mar 25 01:37:06.746957 extend-filesystems[1692]: Found sda
Mar 25 01:37:06.746957 extend-filesystems[1692]: Found sda1
Mar 25 01:37:06.746957 extend-filesystems[1692]: Found sda2
Mar 25 01:37:06.746957 extend-filesystems[1692]: Found sda3
Mar 25 01:37:06.746957 extend-filesystems[1692]: Found usr
Mar 25 01:37:06.746957 extend-filesystems[1692]: Found sda4
Mar 25 01:37:06.746957 extend-filesystems[1692]: Found sda6
Mar 25 01:37:06.746957 extend-filesystems[1692]: Found sda7
Mar 25 01:37:06.746957 extend-filesystems[1692]: Found sda9
Mar 25 01:37:06.746957 extend-filesystems[1692]: Checking size of /dev/sda9
Mar 25 01:37:06.772610 chronyd[1716]: Timezone right/UTC failed leap second check, ignoring
Mar 25 01:37:06.772809 chronyd[1716]: Loaded seccomp filter (level 2)
Mar 25 01:37:06.779773 jq[1711]: true
Mar 25 01:37:06.780334 systemd[1]: Started chronyd.service - NTP client/server.
Mar 25 01:37:06.783731 systemd[1]: motdgen.service: Deactivated successfully.
Mar 25 01:37:06.783961 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 25 01:37:06.787294 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 25 01:37:06.787551 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 25 01:37:06.805203 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 25 01:37:06.817759 extend-filesystems[1692]: Old size kept for /dev/sda9
Mar 25 01:37:06.817759 extend-filesystems[1692]: Found sr0
Mar 25 01:37:06.816045 dbus-daemon[1687]: [system] SELinux support is enabled
Mar 25 01:37:06.825679 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 25 01:37:06.848441 update_engine[1709]: I20250325 01:37:06.841769 1709 main.cc:92] Flatcar Update Engine starting
Mar 25 01:37:06.839399 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 25 01:37:06.839647 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 25 01:37:06.889331 jq[1723]: true
Mar 25 01:37:06.875858 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 25 01:37:06.889603 update_engine[1709]: I20250325 01:37:06.869498 1709 update_check_scheduler.cc:74] Next update check in 8m12s
Mar 25 01:37:06.875909 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 25 01:37:06.884671 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 25 01:37:06.884694 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 25 01:37:06.892531 systemd[1]: Started update-engine.service - Update Engine.
Mar 25 01:37:06.895623 (ntainerd)[1729]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 25 01:37:06.905527 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 25 01:37:06.922780 tar[1722]: linux-amd64/helm
Mar 25 01:37:06.965325 coreos-metadata[1686]: Mar 25 01:37:06.965 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 25 01:37:06.973962 coreos-metadata[1686]: Mar 25 01:37:06.973 INFO Fetch successful
Mar 25 01:37:06.976181 coreos-metadata[1686]: Mar 25 01:37:06.976 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 25 01:37:06.982394 coreos-metadata[1686]: Mar 25 01:37:06.981 INFO Fetch successful
Mar 25 01:37:06.983619 coreos-metadata[1686]: Mar 25 01:37:06.983 INFO Fetching http://168.63.129.16/machine/a9811f82-ea82-4899-9384-9e4cc961b561/b1baf83c%2D0851%2D48b8%2Da4df%2D6c14cf7e97f1.%5Fci%2D4284.0.0%2Da%2Db8cd1bf009?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 25 01:37:06.991712 coreos-metadata[1686]: Mar 25 01:37:06.991 INFO Fetch successful
Mar 25 01:37:06.991712 coreos-metadata[1686]: Mar 25 01:37:06.991 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 25 01:37:07.002763 systemd-logind[1704]: New seat seat0.
Mar 25 01:37:07.006864 coreos-metadata[1686]: Mar 25 01:37:07.004 INFO Fetch successful
Mar 25 01:37:07.018463 systemd-logind[1704]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 25 01:37:07.018679 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 25 01:37:07.051813 bash[1763]: Updated "/home/core/.ssh/authorized_keys"
Mar 25 01:37:07.055143 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 25 01:37:07.060080 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 25 01:37:07.086702 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 25 01:37:07.093183 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 25 01:37:07.143316 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1771)
Mar 25 01:37:07.330691 locksmithd[1748]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 25 01:37:07.773625 sshd_keygen[1721]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 25 01:37:07.808817 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 25 01:37:07.814531 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 25 01:37:07.821095 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Mar 25 01:37:07.863683 systemd[1]: issuegen.service: Deactivated successfully.
Mar 25 01:37:07.865651 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 25 01:37:07.882964 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 25 01:37:07.896242 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Mar 25 01:37:07.922779 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 25 01:37:07.932435 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 25 01:37:07.946851 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 25 01:37:07.949958 systemd[1]: Reached target getty.target - Login Prompts.
Mar 25 01:37:07.985067 tar[1722]: linux-amd64/LICENSE
Mar 25 01:37:07.985530 tar[1722]: linux-amd64/README.md
Mar 25 01:37:08.005089 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 25 01:37:08.079693 containerd[1729]: time="2025-03-25T01:37:08Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 25 01:37:08.080954 containerd[1729]: time="2025-03-25T01:37:08.080907300Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
Mar 25 01:37:08.095997 containerd[1729]: time="2025-03-25T01:37:08.094974400Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.6µs"
Mar 25 01:37:08.095997 containerd[1729]: time="2025-03-25T01:37:08.095020900Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 25 01:37:08.095997 containerd[1729]: time="2025-03-25T01:37:08.095048000Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 25 01:37:08.095997 containerd[1729]: time="2025-03-25T01:37:08.095244900Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 25 01:37:08.095997 containerd[1729]: time="2025-03-25T01:37:08.095296800Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 25 01:37:08.095997 containerd[1729]: time="2025-03-25T01:37:08.095348200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 25 01:37:08.095997 containerd[1729]: time="2025-03-25T01:37:08.095431100Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 25 01:37:08.095997 containerd[1729]: time="2025-03-25T01:37:08.095446600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 25 01:37:08.095997 containerd[1729]: time="2025-03-25T01:37:08.095778200Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 25 01:37:08.095997 containerd[1729]: time="2025-03-25T01:37:08.095805300Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 25 01:37:08.095997 containerd[1729]: time="2025-03-25T01:37:08.095822600Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 25 01:37:08.095997 containerd[1729]: time="2025-03-25T01:37:08.095836500Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 25 01:37:08.096579 containerd[1729]: time="2025-03-25T01:37:08.095959100Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 25 01:37:08.096579 containerd[1729]: time="2025-03-25T01:37:08.096193800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 25 01:37:08.096579 containerd[1729]: time="2025-03-25T01:37:08.096235200Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 25 01:37:08.096579 containerd[1729]: time="2025-03-25T01:37:08.096249200Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 25 01:37:08.096579 containerd[1729]: time="2025-03-25T01:37:08.096321800Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 25 01:37:08.097084 containerd[1729]: time="2025-03-25T01:37:08.096768000Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 25 01:37:08.097084 containerd[1729]: time="2025-03-25T01:37:08.096861400Z" level=info msg="metadata content store policy set" policy=shared
Mar 25 01:37:08.110510 containerd[1729]: time="2025-03-25T01:37:08.110433200Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 25 01:37:08.110938 containerd[1729]: time="2025-03-25T01:37:08.110646700Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 25 01:37:08.110938 containerd[1729]: time="2025-03-25T01:37:08.110676200Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 25 01:37:08.110938 containerd[1729]: time="2025-03-25T01:37:08.110733700Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 25 01:37:08.110938 containerd[1729]: time="2025-03-25T01:37:08.110762800Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 25 01:37:08.110938 containerd[1729]: time="2025-03-25T01:37:08.110781500Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 25 01:37:08.110938 containerd[1729]: time="2025-03-25T01:37:08.110797600Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 25 01:37:08.110938 containerd[1729]: time="2025-03-25T01:37:08.110815000Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 25 01:37:08.110938 containerd[1729]: time="2025-03-25T01:37:08.110833900Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 25 01:37:08.110938 containerd[1729]: time="2025-03-25T01:37:08.110849700Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 25 01:37:08.110938 containerd[1729]: time="2025-03-25T01:37:08.110863400Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 25 01:37:08.110938 containerd[1729]: time="2025-03-25T01:37:08.110880500Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 25 01:37:08.111328 containerd[1729]: time="2025-03-25T01:37:08.111041400Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 25 01:37:08.111328 containerd[1729]: time="2025-03-25T01:37:08.111082300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 25 01:37:08.111328 containerd[1729]: time="2025-03-25T01:37:08.111112300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 25 01:37:08.111328 containerd[1729]: time="2025-03-25T01:37:08.111131300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 25 01:37:08.111328 containerd[1729]: time="2025-03-25T01:37:08.111147800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 25 01:37:08.111328 containerd[1729]: time="2025-03-25T01:37:08.111162400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 25 01:37:08.111328 containerd[1729]: time="2025-03-25T01:37:08.111180500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 25 01:37:08.111328 containerd[1729]: time="2025-03-25T01:37:08.111196600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 25 01:37:08.111328 containerd[1729]: time="2025-03-25T01:37:08.111228300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 25 01:37:08.111328 containerd[1729]: time="2025-03-25T01:37:08.111245000Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 25 01:37:08.111328 containerd[1729]: time="2025-03-25T01:37:08.111261100Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 25 01:37:08.111686 containerd[1729]: time="2025-03-25T01:37:08.111347100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 25 01:37:08.111686 containerd[1729]: time="2025-03-25T01:37:08.111367300Z" level=info msg="Start snapshots syncer"
Mar 25 01:37:08.111686 containerd[1729]: time="2025-03-25T01:37:08.111430900Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 25 01:37:08.112899 containerd[1729]: time="2025-03-25T01:37:08.111810500Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 25 01:37:08.112899 containerd[1729]: time="2025-03-25T01:37:08.111884700Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 25 01:37:08.113132 containerd[1729]: time="2025-03-25T01:37:08.112010000Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 25 01:37:08.113132 containerd[1729]: time="2025-03-25T01:37:08.112205100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 25 01:37:08.113132 containerd[1729]: time="2025-03-25T01:37:08.112246700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 25 01:37:08.113132 containerd[1729]: time="2025-03-25T01:37:08.112264600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 25 01:37:08.113132 containerd[1729]: time="2025-03-25T01:37:08.112321600Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 25 01:37:08.113132 containerd[1729]: time="2025-03-25T01:37:08.112355200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 25 01:37:08.113132 containerd[1729]: time="2025-03-25T01:37:08.112376900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 25 01:37:08.113132 containerd[1729]: time="2025-03-25T01:37:08.112394800Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 25 01:37:08.113132 containerd[1729]: time="2025-03-25T01:37:08.112429600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 25 01:37:08.113132 containerd[1729]: time="2025-03-25T01:37:08.112450600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 25 01:37:08.113132 containerd[1729]: time="2025-03-25T01:37:08.112464900Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 25 01:37:08.113132 containerd[1729]: time="2025-03-25T01:37:08.112532300Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 25 01:37:08.113132 containerd[1729]: time="2025-03-25T01:37:08.112555400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 25 01:37:08.113132 containerd[1729]: time="2025-03-25T01:37:08.112626900Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 25 01:37:08.113609 containerd[1729]: time="2025-03-25T01:37:08.112646400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 25 01:37:08.113609 containerd[1729]: time="2025-03-25T01:37:08.112660100Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 25 01:37:08.113609 containerd[1729]: time="2025-03-25T01:37:08.112675700Z"
level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 25 01:37:08.113609 containerd[1729]: time="2025-03-25T01:37:08.112691900Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 25 01:37:08.113609 containerd[1729]: time="2025-03-25T01:37:08.112722300Z" level=info msg="runtime interface created" Mar 25 01:37:08.113609 containerd[1729]: time="2025-03-25T01:37:08.112731700Z" level=info msg="created NRI interface" Mar 25 01:37:08.113609 containerd[1729]: time="2025-03-25T01:37:08.112745800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 25 01:37:08.113609 containerd[1729]: time="2025-03-25T01:37:08.112764700Z" level=info msg="Connect containerd service" Mar 25 01:37:08.113609 containerd[1729]: time="2025-03-25T01:37:08.112806600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 25 01:37:08.113884 containerd[1729]: time="2025-03-25T01:37:08.113822400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 01:37:08.380896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:37:08.394729 (kubelet)[1881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:37:08.941776 containerd[1729]: time="2025-03-25T01:37:08.941686600Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Mar 25 01:37:08.941776 containerd[1729]: time="2025-03-25T01:37:08.941762900Z" level=info msg="Start subscribing containerd event" Mar 25 01:37:08.946386 containerd[1729]: time="2025-03-25T01:37:08.941826400Z" level=info msg="Start recovering state" Mar 25 01:37:08.946386 containerd[1729]: time="2025-03-25T01:37:08.942134200Z" level=info msg="Start event monitor" Mar 25 01:37:08.946386 containerd[1729]: time="2025-03-25T01:37:08.942156300Z" level=info msg="Start cni network conf syncer for default" Mar 25 01:37:08.946386 containerd[1729]: time="2025-03-25T01:37:08.942167200Z" level=info msg="Start streaming server" Mar 25 01:37:08.946386 containerd[1729]: time="2025-03-25T01:37:08.942180500Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 25 01:37:08.946386 containerd[1729]: time="2025-03-25T01:37:08.942191700Z" level=info msg="runtime interface starting up..." Mar 25 01:37:08.946386 containerd[1729]: time="2025-03-25T01:37:08.942200500Z" level=info msg="starting plugins..." Mar 25 01:37:08.946386 containerd[1729]: time="2025-03-25T01:37:08.942218100Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 25 01:37:08.946386 containerd[1729]: time="2025-03-25T01:37:08.941999700Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 25 01:37:08.946386 containerd[1729]: time="2025-03-25T01:37:08.942389100Z" level=info msg="containerd successfully booted in 0.863349s" Mar 25 01:37:08.942809 systemd[1]: Started containerd.service - containerd container runtime. Mar 25 01:37:08.950860 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 25 01:37:08.956901 systemd[1]: Startup finished in 3.880s (firmware) + 27.226s (loader) + 977ms (kernel) + 12.119s (initrd) + 11.621s (userspace) = 55.824s. 
Mar 25 01:37:09.109967 kubelet[1881]: E0325 01:37:09.109929 1881 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:37:09.112494 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:37:09.112682 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:37:09.113107 systemd[1]: kubelet.service: Consumed 955ms CPU time, 245.4M memory peak. Mar 25 01:37:09.302054 login[1866]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 25 01:37:09.304751 login[1867]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 25 01:37:09.317563 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 25 01:37:09.323554 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 25 01:37:09.338339 systemd-logind[1704]: New session 2 of user core. Mar 25 01:37:09.347334 systemd-logind[1704]: New session 1 of user core. Mar 25 01:37:09.352548 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 25 01:37:09.355467 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 25 01:37:09.381794 (systemd)[1904]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 25 01:37:09.384400 systemd-logind[1704]: New session c1 of user core. 
Mar 25 01:37:09.522299 waagent[1863]: 2025-03-25T01:37:09.520853Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Mar 25 01:37:09.522299 waagent[1863]: 2025-03-25T01:37:09.521312Z INFO Daemon Daemon OS: flatcar 4284.0.0 Mar 25 01:37:09.522299 waagent[1863]: 2025-03-25T01:37:09.521839Z INFO Daemon Daemon Python: 3.11.11 Mar 25 01:37:09.523249 waagent[1863]: 2025-03-25T01:37:09.523204Z INFO Daemon Daemon Run daemon Mar 25 01:37:09.524092 waagent[1863]: 2025-03-25T01:37:09.524059Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4284.0.0' Mar 25 01:37:09.524987 waagent[1863]: 2025-03-25T01:37:09.524955Z INFO Daemon Daemon Using waagent for provisioning Mar 25 01:37:09.526075 waagent[1863]: 2025-03-25T01:37:09.526041Z INFO Daemon Daemon Activate resource disk Mar 25 01:37:09.527015 waagent[1863]: 2025-03-25T01:37:09.526984Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 25 01:37:09.531476 waagent[1863]: 2025-03-25T01:37:09.531440Z INFO Daemon Daemon Found device: None Mar 25 01:37:09.531745 waagent[1863]: 2025-03-25T01:37:09.531716Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 25 01:37:09.532636 waagent[1863]: 2025-03-25T01:37:09.532610Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 25 01:37:09.533665 waagent[1863]: 2025-03-25T01:37:09.533628Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 25 01:37:09.534651 waagent[1863]: 2025-03-25T01:37:09.534621Z INFO Daemon Daemon Running default provisioning handler Mar 25 01:37:09.564414 waagent[1863]: 2025-03-25T01:37:09.563395Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Mar 25 01:37:09.565332 waagent[1863]: 2025-03-25T01:37:09.565275Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 25 01:37:09.566160 waagent[1863]: 2025-03-25T01:37:09.566129Z INFO Daemon Daemon cloud-init is enabled: False Mar 25 01:37:09.567122 waagent[1863]: 2025-03-25T01:37:09.567094Z INFO Daemon Daemon Copying ovf-env.xml Mar 25 01:37:09.652957 systemd[1904]: Queued start job for default target default.target. Mar 25 01:37:09.660331 systemd[1904]: Created slice app.slice - User Application Slice. Mar 25 01:37:09.660366 systemd[1904]: Reached target paths.target - Paths. Mar 25 01:37:09.660418 systemd[1904]: Reached target timers.target - Timers. Mar 25 01:37:09.661716 systemd[1904]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 25 01:37:09.672809 systemd[1904]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 25 01:37:09.672875 systemd[1904]: Reached target sockets.target - Sockets. Mar 25 01:37:09.672923 systemd[1904]: Reached target basic.target - Basic System. Mar 25 01:37:09.672969 systemd[1904]: Reached target default.target - Main User Target. Mar 25 01:37:09.673004 systemd[1904]: Startup finished in 279ms. Mar 25 01:37:09.673178 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 25 01:37:09.680679 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 25 01:37:09.683105 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 25 01:37:09.700981 waagent[1863]: 2025-03-25T01:37:09.700900Z INFO Daemon Daemon Successfully mounted dvd Mar 25 01:37:09.727042 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Mar 25 01:37:09.732026 waagent[1863]: 2025-03-25T01:37:09.730761Z INFO Daemon Daemon Detect protocol endpoint Mar 25 01:37:09.732026 waagent[1863]: 2025-03-25T01:37:09.731011Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 25 01:37:09.732435 waagent[1863]: 2025-03-25T01:37:09.732399Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Mar 25 01:37:09.733186 waagent[1863]: 2025-03-25T01:37:09.733158Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 25 01:37:09.734211 waagent[1863]: 2025-03-25T01:37:09.734178Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 25 01:37:09.734613 waagent[1863]: 2025-03-25T01:37:09.734585Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 25 01:37:09.759409 waagent[1863]: 2025-03-25T01:37:09.759374Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 25 01:37:09.759897 waagent[1863]: 2025-03-25T01:37:09.759876Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 25 01:37:09.760275 waagent[1863]: 2025-03-25T01:37:09.760250Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 25 01:37:09.907451 waagent[1863]: 2025-03-25T01:37:09.907305Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 25 01:37:09.911132 waagent[1863]: 2025-03-25T01:37:09.911068Z INFO Daemon Daemon Forcing an update of the goal state. 
Mar 25 01:37:09.917251 waagent[1863]: 2025-03-25T01:37:09.917198Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 25 01:37:09.956861 waagent[1863]: 2025-03-25T01:37:09.956793Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 Mar 25 01:37:09.976638 waagent[1863]: 2025-03-25T01:37:09.957640Z INFO Daemon Mar 25 01:37:09.976638 waagent[1863]: 2025-03-25T01:37:09.958464Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 982595bb-fdc3-4d08-b90a-ec42b931019c eTag: 11662508773284741372 source: Fabric] Mar 25 01:37:09.976638 waagent[1863]: 2025-03-25T01:37:09.960171Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 25 01:37:09.976638 waagent[1863]: 2025-03-25T01:37:09.961077Z INFO Daemon Mar 25 01:37:09.976638 waagent[1863]: 2025-03-25T01:37:09.961881Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 25 01:37:09.976638 waagent[1863]: 2025-03-25T01:37:09.967242Z INFO Daemon Daemon Downloading artifacts profile blob Mar 25 01:37:10.041298 waagent[1863]: 2025-03-25T01:37:10.041201Z INFO Daemon Downloaded certificate {'thumbprint': '8D0755C5F996FBF5C0D46CD2F2C9011DF28F12F7', 'hasPrivateKey': False} Mar 25 01:37:10.046441 waagent[1863]: 2025-03-25T01:37:10.046387Z INFO Daemon Downloaded certificate {'thumbprint': '8CBA7404DA26697F91B20EF5A07BF2FD0FB827F0', 'hasPrivateKey': True} Mar 25 01:37:10.053612 waagent[1863]: 2025-03-25T01:37:10.046860Z INFO Daemon Fetch goal state completed Mar 25 01:37:10.059291 waagent[1863]: 2025-03-25T01:37:10.059238Z INFO Daemon Daemon Starting provisioning Mar 25 01:37:10.067414 waagent[1863]: 2025-03-25T01:37:10.059480Z INFO Daemon Daemon Handle ovf-env.xml. 
Mar 25 01:37:10.067414 waagent[1863]: 2025-03-25T01:37:10.060147Z INFO Daemon Daemon Set hostname [ci-4284.0.0-a-b8cd1bf009] Mar 25 01:37:10.098396 waagent[1863]: 2025-03-25T01:37:10.098311Z INFO Daemon Daemon Publish hostname [ci-4284.0.0-a-b8cd1bf009] Mar 25 01:37:10.106475 waagent[1863]: 2025-03-25T01:37:10.098966Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 25 01:37:10.106475 waagent[1863]: 2025-03-25T01:37:10.099928Z INFO Daemon Daemon Primary interface is [eth0] Mar 25 01:37:10.109273 systemd-networkd[1560]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:37:10.109299 systemd-networkd[1560]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 25 01:37:10.109347 systemd-networkd[1560]: eth0: DHCP lease lost Mar 25 01:37:10.110442 waagent[1863]: 2025-03-25T01:37:10.110387Z INFO Daemon Daemon Create user account if not exists Mar 25 01:37:10.123975 waagent[1863]: 2025-03-25T01:37:10.113395Z INFO Daemon Daemon User core already exists, skip useradd Mar 25 01:37:10.123975 waagent[1863]: 2025-03-25T01:37:10.113572Z INFO Daemon Daemon Configure sudoer Mar 25 01:37:10.123975 waagent[1863]: 2025-03-25T01:37:10.114271Z INFO Daemon Daemon Configure sshd Mar 25 01:37:10.123975 waagent[1863]: 2025-03-25T01:37:10.115224Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 25 01:37:10.123975 waagent[1863]: 2025-03-25T01:37:10.115959Z INFO Daemon Daemon Deploy ssh public key. 
Mar 25 01:37:10.168353 systemd-networkd[1560]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 25 01:37:11.229750 waagent[1863]: 2025-03-25T01:37:11.229674Z INFO Daemon Daemon Provisioning complete Mar 25 01:37:11.242728 waagent[1863]: 2025-03-25T01:37:11.242670Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 25 01:37:11.250483 waagent[1863]: 2025-03-25T01:37:11.242979Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 25 01:37:11.250483 waagent[1863]: 2025-03-25T01:37:11.244008Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Mar 25 01:37:11.372304 waagent[1960]: 2025-03-25T01:37:11.372198Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Mar 25 01:37:11.372738 waagent[1960]: 2025-03-25T01:37:11.372368Z INFO ExtHandler ExtHandler OS: flatcar 4284.0.0 Mar 25 01:37:11.372738 waagent[1960]: 2025-03-25T01:37:11.372446Z INFO ExtHandler ExtHandler Python: 3.11.11 Mar 25 01:37:11.372738 waagent[1960]: 2025-03-25T01:37:11.372519Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Mar 25 01:37:11.438507 waagent[1960]: 2025-03-25T01:37:11.438413Z INFO ExtHandler ExtHandler Distro: flatcar-4284.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 25 01:37:11.438765 waagent[1960]: 2025-03-25T01:37:11.438712Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 25 01:37:11.438864 waagent[1960]: 2025-03-25T01:37:11.438825Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 25 01:37:11.446844 waagent[1960]: 2025-03-25T01:37:11.446784Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 25 01:37:11.452970 waagent[1960]: 2025-03-25T01:37:11.452916Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Mar 25 
01:37:11.453447 waagent[1960]: 2025-03-25T01:37:11.453396Z INFO ExtHandler Mar 25 01:37:11.453531 waagent[1960]: 2025-03-25T01:37:11.453484Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: cd29471b-4812-4a69-a70e-6da306c85854 eTag: 11662508773284741372 source: Fabric] Mar 25 01:37:11.453824 waagent[1960]: 2025-03-25T01:37:11.453777Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 25 01:37:11.454348 waagent[1960]: 2025-03-25T01:37:11.454301Z INFO ExtHandler Mar 25 01:37:11.454412 waagent[1960]: 2025-03-25T01:37:11.454378Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 25 01:37:11.458656 waagent[1960]: 2025-03-25T01:37:11.458620Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 25 01:37:11.537028 waagent[1960]: 2025-03-25T01:37:11.536899Z INFO ExtHandler Downloaded certificate {'thumbprint': '8D0755C5F996FBF5C0D46CD2F2C9011DF28F12F7', 'hasPrivateKey': False} Mar 25 01:37:11.537433 waagent[1960]: 2025-03-25T01:37:11.537386Z INFO ExtHandler Downloaded certificate {'thumbprint': '8CBA7404DA26697F91B20EF5A07BF2FD0FB827F0', 'hasPrivateKey': True} Mar 25 01:37:11.537864 waagent[1960]: 2025-03-25T01:37:11.537821Z INFO ExtHandler Fetch goal state completed Mar 25 01:37:11.550962 waagent[1960]: 2025-03-25T01:37:11.550906Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Mar 25 01:37:11.555808 waagent[1960]: 2025-03-25T01:37:11.555749Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1960 Mar 25 01:37:11.555949 waagent[1960]: 2025-03-25T01:37:11.555914Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 25 01:37:11.556329 waagent[1960]: 2025-03-25T01:37:11.556254Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Mar 25 01:37:11.557735 waagent[1960]: 
2025-03-25T01:37:11.557689Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4284.0.0', '', 'Flatcar Container Linux by Kinvolk'] Mar 25 01:37:11.558143 waagent[1960]: 2025-03-25T01:37:11.558100Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4284.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Mar 25 01:37:11.558306 waagent[1960]: 2025-03-25T01:37:11.558256Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Mar 25 01:37:11.558911 waagent[1960]: 2025-03-25T01:37:11.558866Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 25 01:37:11.595249 waagent[1960]: 2025-03-25T01:37:11.595200Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 25 01:37:11.595491 waagent[1960]: 2025-03-25T01:37:11.595446Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 25 01:37:11.602332 waagent[1960]: 2025-03-25T01:37:11.602086Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 25 01:37:11.609224 systemd[1]: Reload requested from client PID 1977 ('systemctl') (unit waagent.service)... Mar 25 01:37:11.609241 systemd[1]: Reloading... Mar 25 01:37:11.700313 zram_generator::config[2013]: No configuration found. Mar 25 01:37:11.842455 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:37:11.957755 systemd[1]: Reloading finished in 348 ms. 
Mar 25 01:37:11.982297 waagent[1960]: 2025-03-25T01:37:11.978813Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 25 01:37:11.982297 waagent[1960]: 2025-03-25T01:37:11.979120Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 25 01:37:12.998869 waagent[1960]: 2025-03-25T01:37:12.998774Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 25 01:37:12.999379 waagent[1960]: 2025-03-25T01:37:12.999319Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Mar 25 01:37:13.000232 waagent[1960]: 2025-03-25T01:37:13.000163Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 25 01:37:13.000627 waagent[1960]: 2025-03-25T01:37:13.000581Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 25 01:37:13.000800 waagent[1960]: 2025-03-25T01:37:13.000749Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 25 01:37:13.000993 waagent[1960]: 2025-03-25T01:37:13.000955Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 25 01:37:13.001274 waagent[1960]: 2025-03-25T01:37:13.001233Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 25 01:37:13.001552 waagent[1960]: 2025-03-25T01:37:13.001511Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Mar 25 01:37:13.001788 waagent[1960]: 2025-03-25T01:37:13.001744Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 25 01:37:13.001881 waagent[1960]: 2025-03-25T01:37:13.001815Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 25 01:37:13.001967 waagent[1960]: 2025-03-25T01:37:13.001896Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 25 01:37:13.002107 waagent[1960]: 2025-03-25T01:37:13.002068Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 25 01:37:13.002107 waagent[1960]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 25 01:37:13.002107 waagent[1960]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Mar 25 01:37:13.002107 waagent[1960]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 25 01:37:13.002107 waagent[1960]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 25 01:37:13.002107 waagent[1960]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 25 01:37:13.002107 waagent[1960]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 25 01:37:13.002383 waagent[1960]: 2025-03-25T01:37:13.002299Z INFO EnvHandler ExtHandler Configure routes Mar 25 01:37:13.002422 waagent[1960]: 2025-03-25T01:37:13.002394Z INFO EnvHandler ExtHandler Gateway:None Mar 25 01:37:13.002507 waagent[1960]: 2025-03-25T01:37:13.002457Z INFO EnvHandler ExtHandler Routes:None Mar 25 01:37:13.003342 waagent[1960]: 2025-03-25T01:37:13.003259Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 25 01:37:13.003532 waagent[1960]: 2025-03-25T01:37:13.003488Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Mar 25 01:37:13.003938 waagent[1960]: 2025-03-25T01:37:13.003884Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 25 01:37:13.010239 waagent[1960]: 2025-03-25T01:37:13.010193Z INFO ExtHandler ExtHandler Mar 25 01:37:13.010622 waagent[1960]: 2025-03-25T01:37:13.010579Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f7c517fe-d526-4f70-bf1f-47bc265ba6d6 correlation 67a44ab6-87ff-4e5d-acbb-45c4cda0a4c9 created: 2025-03-25T01:36:03.066589Z] Mar 25 01:37:13.011861 waagent[1960]: 2025-03-25T01:37:13.011825Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 25 01:37:13.013924 waagent[1960]: 2025-03-25T01:37:13.013888Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Mar 25 01:37:13.059164 waagent[1960]: 2025-03-25T01:37:13.059108Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: AE727C19-01B8-4A1F-B736-FAE039FBBA22;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Mar 25 01:37:13.074533 waagent[1960]: 2025-03-25T01:37:13.074463Z INFO MonitorHandler ExtHandler Network interfaces: Mar 25 01:37:13.074533 waagent[1960]: Executing ['ip', '-a', '-o', 'link']: Mar 25 01:37:13.074533 waagent[1960]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 25 01:37:13.074533 waagent[1960]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2d:64:76 brd ff:ff:ff:ff:ff:ff Mar 25 01:37:13.074533 waagent[1960]: 3: enP32805s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2d:64:76 brd ff:ff:ff:ff:ff:ff\ altname enP32805p0s2 Mar 25 01:37:13.074533 waagent[1960]: Executing ['ip', '-4', '-a', '-o', 
'address']: Mar 25 01:37:13.074533 waagent[1960]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 25 01:37:13.074533 waagent[1960]: 2: eth0 inet 10.200.8.12/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 25 01:37:13.074533 waagent[1960]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 25 01:37:13.074533 waagent[1960]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 25 01:37:13.074533 waagent[1960]: 2: eth0 inet6 fe80::7eed:8dff:fe2d:6476/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 25 01:37:13.074533 waagent[1960]: 3: enP32805s1 inet6 fe80::7eed:8dff:fe2d:6476/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 25 01:37:13.115581 waagent[1960]: 2025-03-25T01:37:13.115522Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Mar 25 01:37:13.115581 waagent[1960]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 25 01:37:13.115581 waagent[1960]: pkts bytes target prot opt in out source destination Mar 25 01:37:13.115581 waagent[1960]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 25 01:37:13.115581 waagent[1960]: pkts bytes target prot opt in out source destination Mar 25 01:37:13.115581 waagent[1960]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 25 01:37:13.115581 waagent[1960]: pkts bytes target prot opt in out source destination Mar 25 01:37:13.115581 waagent[1960]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 25 01:37:13.115581 waagent[1960]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 25 01:37:13.115581 waagent[1960]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 25 01:37:13.121024 waagent[1960]: 2025-03-25T01:37:13.120972Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 25 01:37:13.121024 waagent[1960]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 25 01:37:13.121024 
waagent[1960]: pkts bytes target prot opt in out source destination Mar 25 01:37:13.121024 waagent[1960]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 25 01:37:13.121024 waagent[1960]: pkts bytes target prot opt in out source destination Mar 25 01:37:13.121024 waagent[1960]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 25 01:37:13.121024 waagent[1960]: pkts bytes target prot opt in out source destination Mar 25 01:37:13.121024 waagent[1960]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 25 01:37:13.121024 waagent[1960]: 4 595 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 25 01:37:13.121024 waagent[1960]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 25 01:37:13.121425 waagent[1960]: 2025-03-25T01:37:13.121263Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 25 01:37:19.182201 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 25 01:37:19.184362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:37:19.304884 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:37:19.312593 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:37:19.906781 kubelet[2114]: E0325 01:37:19.906706 2114 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:37:19.910399 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:37:19.910585 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:37:19.911016 systemd[1]: kubelet.service: Consumed 156ms CPU time, 97M memory peak. 
Mar 25 01:37:19.964888 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 25 01:37:19.966239 systemd[1]: Started sshd@0-10.200.8.12:22-10.200.16.10:44518.service - OpenSSH per-connection server daemon (10.200.16.10:44518). Mar 25 01:37:20.732894 sshd[2123]: Accepted publickey for core from 10.200.16.10 port 44518 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:37:20.734599 sshd-session[2123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:37:20.739768 systemd-logind[1704]: New session 3 of user core. Mar 25 01:37:20.750440 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 25 01:37:21.312064 systemd[1]: Started sshd@1-10.200.8.12:22-10.200.16.10:44526.service - OpenSSH per-connection server daemon (10.200.16.10:44526). Mar 25 01:37:21.946336 sshd[2128]: Accepted publickey for core from 10.200.16.10 port 44526 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:37:21.947980 sshd-session[2128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:37:21.952157 systemd-logind[1704]: New session 4 of user core. Mar 25 01:37:21.962438 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 25 01:37:22.390111 sshd[2130]: Connection closed by 10.200.16.10 port 44526 Mar 25 01:37:22.391367 sshd-session[2128]: pam_unix(sshd:session): session closed for user core Mar 25 01:37:22.394697 systemd[1]: sshd@1-10.200.8.12:22-10.200.16.10:44526.service: Deactivated successfully. Mar 25 01:37:22.397015 systemd[1]: session-4.scope: Deactivated successfully. Mar 25 01:37:22.398759 systemd-logind[1704]: Session 4 logged out. Waiting for processes to exit. Mar 25 01:37:22.399882 systemd-logind[1704]: Removed session 4. Mar 25 01:37:22.501076 systemd[1]: Started sshd@2-10.200.8.12:22-10.200.16.10:44536.service - OpenSSH per-connection server daemon (10.200.16.10:44536). 
Mar 25 01:37:23.135923 sshd[2136]: Accepted publickey for core from 10.200.16.10 port 44536 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:37:23.137619 sshd-session[2136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:37:23.143642 systemd-logind[1704]: New session 5 of user core. Mar 25 01:37:23.148645 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 25 01:37:23.578316 sshd[2138]: Connection closed by 10.200.16.10 port 44536 Mar 25 01:37:23.579139 sshd-session[2136]: pam_unix(sshd:session): session closed for user core Mar 25 01:37:23.582598 systemd[1]: sshd@2-10.200.8.12:22-10.200.16.10:44536.service: Deactivated successfully. Mar 25 01:37:23.584732 systemd[1]: session-5.scope: Deactivated successfully. Mar 25 01:37:23.586402 systemd-logind[1704]: Session 5 logged out. Waiting for processes to exit. Mar 25 01:37:23.587364 systemd-logind[1704]: Removed session 5. Mar 25 01:37:23.690136 systemd[1]: Started sshd@3-10.200.8.12:22-10.200.16.10:44544.service - OpenSSH per-connection server daemon (10.200.16.10:44544). Mar 25 01:37:24.320683 sshd[2144]: Accepted publickey for core from 10.200.16.10 port 44544 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:37:24.322399 sshd-session[2144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:37:24.327392 systemd-logind[1704]: New session 6 of user core. Mar 25 01:37:24.336446 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 25 01:37:24.765231 sshd[2146]: Connection closed by 10.200.16.10 port 44544 Mar 25 01:37:24.766045 sshd-session[2144]: pam_unix(sshd:session): session closed for user core Mar 25 01:37:24.770198 systemd[1]: sshd@3-10.200.8.12:22-10.200.16.10:44544.service: Deactivated successfully. Mar 25 01:37:24.772083 systemd[1]: session-6.scope: Deactivated successfully. Mar 25 01:37:24.772808 systemd-logind[1704]: Session 6 logged out. 
Waiting for processes to exit. Mar 25 01:37:24.773673 systemd-logind[1704]: Removed session 6. Mar 25 01:37:24.877078 systemd[1]: Started sshd@4-10.200.8.12:22-10.200.16.10:44560.service - OpenSSH per-connection server daemon (10.200.16.10:44560). Mar 25 01:37:25.508098 sshd[2152]: Accepted publickey for core from 10.200.16.10 port 44560 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:37:25.509776 sshd-session[2152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:37:25.514583 systemd-logind[1704]: New session 7 of user core. Mar 25 01:37:25.521435 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 25 01:37:26.006803 sudo[2155]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 25 01:37:26.007167 sudo[2155]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:37:26.033112 sudo[2155]: pam_unix(sudo:session): session closed for user root Mar 25 01:37:26.135026 sshd[2154]: Connection closed by 10.200.16.10 port 44560 Mar 25 01:37:26.136223 sshd-session[2152]: pam_unix(sshd:session): session closed for user core Mar 25 01:37:26.140021 systemd[1]: sshd@4-10.200.8.12:22-10.200.16.10:44560.service: Deactivated successfully. Mar 25 01:37:26.142379 systemd[1]: session-7.scope: Deactivated successfully. Mar 25 01:37:26.144413 systemd-logind[1704]: Session 7 logged out. Waiting for processes to exit. Mar 25 01:37:26.145555 systemd-logind[1704]: Removed session 7. Mar 25 01:37:26.247165 systemd[1]: Started sshd@5-10.200.8.12:22-10.200.16.10:44564.service - OpenSSH per-connection server daemon (10.200.16.10:44564). 
Mar 25 01:37:26.884483 sshd[2161]: Accepted publickey for core from 10.200.16.10 port 44564 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:37:26.886200 sshd-session[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:37:26.891184 systemd-logind[1704]: New session 8 of user core. Mar 25 01:37:26.901424 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 25 01:37:27.231493 sudo[2165]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 25 01:37:27.231853 sudo[2165]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:37:27.235199 sudo[2165]: pam_unix(sudo:session): session closed for user root Mar 25 01:37:27.240161 sudo[2164]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 25 01:37:27.240526 sudo[2164]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:37:27.249536 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 01:37:27.281110 augenrules[2187]: No rules Mar 25 01:37:27.282507 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 01:37:27.282755 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 01:37:27.284362 sudo[2164]: pam_unix(sudo:session): session closed for user root Mar 25 01:37:27.395162 sshd[2163]: Connection closed by 10.200.16.10 port 44564 Mar 25 01:37:27.395980 sshd-session[2161]: pam_unix(sshd:session): session closed for user core Mar 25 01:37:27.400873 systemd[1]: sshd@5-10.200.8.12:22-10.200.16.10:44564.service: Deactivated successfully. Mar 25 01:37:27.402930 systemd[1]: session-8.scope: Deactivated successfully. Mar 25 01:37:27.403775 systemd-logind[1704]: Session 8 logged out. Waiting for processes to exit. Mar 25 01:37:27.404715 systemd-logind[1704]: Removed session 8. 
Mar 25 01:37:27.510227 systemd[1]: Started sshd@6-10.200.8.12:22-10.200.16.10:44578.service - OpenSSH per-connection server daemon (10.200.16.10:44578). Mar 25 01:37:28.145448 sshd[2196]: Accepted publickey for core from 10.200.16.10 port 44578 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:37:28.147069 sshd-session[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:37:28.151232 systemd-logind[1704]: New session 9 of user core. Mar 25 01:37:28.154413 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 25 01:37:28.490246 sudo[2199]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 25 01:37:28.490656 sudo[2199]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:37:29.932168 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 25 01:37:29.936510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:37:30.103938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:37:30.112598 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:37:30.565009 chronyd[1716]: Selected source PHC0 Mar 25 01:37:30.653590 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 25 01:37:30.661685 (dockerd)[2231]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 25 01:37:30.665270 kubelet[2223]: E0325 01:37:30.665227 2223 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:37:30.667664 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:37:30.667858 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:37:30.668233 systemd[1]: kubelet.service: Consumed 167ms CPU time, 98.2M memory peak. Mar 25 01:37:31.962005 dockerd[2231]: time="2025-03-25T01:37:31.961945425Z" level=info msg="Starting up" Mar 25 01:37:31.963514 dockerd[2231]: time="2025-03-25T01:37:31.963481825Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 25 01:37:32.004971 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2835966087-merged.mount: Deactivated successfully. Mar 25 01:37:32.072812 dockerd[2231]: time="2025-03-25T01:37:32.072768416Z" level=info msg="Loading containers: start." Mar 25 01:37:32.338426 kernel: Initializing XFRM netlink socket Mar 25 01:37:32.464972 systemd-networkd[1560]: docker0: Link UP Mar 25 01:37:32.526421 dockerd[2231]: time="2025-03-25T01:37:32.526378978Z" level=info msg="Loading containers: done." 
Mar 25 01:37:32.545329 dockerd[2231]: time="2025-03-25T01:37:32.545266577Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 25 01:37:32.545508 dockerd[2231]: time="2025-03-25T01:37:32.545385077Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 25 01:37:32.545566 dockerd[2231]: time="2025-03-25T01:37:32.545505577Z" level=info msg="Daemon has completed initialization" Mar 25 01:37:32.592102 dockerd[2231]: time="2025-03-25T01:37:32.592045375Z" level=info msg="API listen on /run/docker.sock" Mar 25 01:37:32.592222 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 25 01:37:37.228826 containerd[1729]: time="2025-03-25T01:37:37.228770875Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 25 01:37:37.882213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2951272337.mount: Deactivated successfully. 
Mar 25 01:37:40.457500 containerd[1729]: time="2025-03-25T01:37:40.457445639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:40.460247 containerd[1729]: time="2025-03-25T01:37:40.460179086Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674581" Mar 25 01:37:40.463931 containerd[1729]: time="2025-03-25T01:37:40.463874150Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:40.468081 containerd[1729]: time="2025-03-25T01:37:40.468043121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:40.469137 containerd[1729]: time="2025-03-25T01:37:40.468958037Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 3.240134562s" Mar 25 01:37:40.469137 containerd[1729]: time="2025-03-25T01:37:40.468998838Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 25 01:37:40.486489 containerd[1729]: time="2025-03-25T01:37:40.486449737Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 25 01:37:40.682265 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Mar 25 01:37:40.684709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:37:40.905892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:37:40.912581 (kubelet)[2504]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:37:41.427866 kubelet[2504]: E0325 01:37:41.427800 2504 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:37:41.430563 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:37:41.430792 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:37:41.431235 systemd[1]: kubelet.service: Consumed 156ms CPU time, 94.5M memory peak. 
Mar 25 01:37:43.779937 containerd[1729]: time="2025-03-25T01:37:43.779886903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:43.783196 containerd[1729]: time="2025-03-25T01:37:43.783035421Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619780" Mar 25 01:37:43.786310 containerd[1729]: time="2025-03-25T01:37:43.786248240Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:43.790668 containerd[1729]: time="2025-03-25T01:37:43.790620965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:43.791633 containerd[1729]: time="2025-03-25T01:37:43.791479870Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 3.304927031s" Mar 25 01:37:43.791633 containerd[1729]: time="2025-03-25T01:37:43.791518070Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 25 01:37:43.810661 containerd[1729]: time="2025-03-25T01:37:43.810626481Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 25 01:37:45.641757 containerd[1729]: time="2025-03-25T01:37:45.641702668Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:45.643934 containerd[1729]: time="2025-03-25T01:37:45.643856281Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903317" Mar 25 01:37:45.647078 containerd[1729]: time="2025-03-25T01:37:45.647017699Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:45.653688 containerd[1729]: time="2025-03-25T01:37:45.653626237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:45.654663 containerd[1729]: time="2025-03-25T01:37:45.654518942Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 1.843854261s" Mar 25 01:37:45.654663 containerd[1729]: time="2025-03-25T01:37:45.654555443Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 25 01:37:45.672393 containerd[1729]: time="2025-03-25T01:37:45.672357346Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 25 01:37:46.774337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount249940895.mount: Deactivated successfully. 
Mar 25 01:37:47.316040 containerd[1729]: time="2025-03-25T01:37:47.315989762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:47.318379 containerd[1729]: time="2025-03-25T01:37:47.318216287Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185380" Mar 25 01:37:47.322865 containerd[1729]: time="2025-03-25T01:37:47.322019128Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:47.326163 containerd[1729]: time="2025-03-25T01:37:47.326103072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:47.327058 containerd[1729]: time="2025-03-25T01:37:47.326650878Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 1.654256832s" Mar 25 01:37:47.327058 containerd[1729]: time="2025-03-25T01:37:47.326687379Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 25 01:37:47.343410 containerd[1729]: time="2025-03-25T01:37:47.343370160Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 25 01:37:47.932318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1927870642.mount: Deactivated successfully. 
Mar 25 01:37:49.171232 containerd[1729]: time="2025-03-25T01:37:49.171177443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:49.174007 containerd[1729]: time="2025-03-25T01:37:49.173940873Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Mar 25 01:37:49.177110 containerd[1729]: time="2025-03-25T01:37:49.177053707Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:49.182296 containerd[1729]: time="2025-03-25T01:37:49.182254063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:49.183297 containerd[1729]: time="2025-03-25T01:37:49.183126773Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.839721213s" Mar 25 01:37:49.183297 containerd[1729]: time="2025-03-25T01:37:49.183164873Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 25 01:37:49.200267 containerd[1729]: time="2025-03-25T01:37:49.200228859Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 25 01:37:49.662174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1410362295.mount: Deactivated successfully. 
Mar 25 01:37:49.685109 containerd[1729]: time="2025-03-25T01:37:49.685062133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:49.687488 containerd[1729]: time="2025-03-25T01:37:49.687423659Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Mar 25 01:37:49.691517 containerd[1729]: time="2025-03-25T01:37:49.691457103Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:49.696017 containerd[1729]: time="2025-03-25T01:37:49.695966652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:49.696612 containerd[1729]: time="2025-03-25T01:37:49.696577658Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 496.301599ms" Mar 25 01:37:49.696697 containerd[1729]: time="2025-03-25T01:37:49.696619059Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Mar 25 01:37:49.713421 containerd[1729]: time="2025-03-25T01:37:49.713386341Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 25 01:37:50.290887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount275792200.mount: Deactivated successfully. Mar 25 01:37:50.971178 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Mar 25 01:37:51.432346 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 25 01:37:51.434508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:37:51.563577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:37:51.575872 (kubelet)[2661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:37:51.622147 kubelet[2661]: E0325 01:37:51.622093 2661 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:37:51.624434 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:37:51.624627 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:37:51.625035 systemd[1]: kubelet.service: Consumed 152ms CPU time, 94.3M memory peak. Mar 25 01:37:52.255957 update_engine[1709]: I20250325 01:37:52.255867 1709 update_attempter.cc:509] Updating boot flags... 
Mar 25 01:37:52.594386 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2689) Mar 25 01:37:52.846380 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2675) Mar 25 01:37:53.982678 containerd[1729]: time="2025-03-25T01:37:53.982626730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:53.985135 containerd[1729]: time="2025-03-25T01:37:53.985067246Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Mar 25 01:37:53.987684 containerd[1729]: time="2025-03-25T01:37:53.987629763Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:53.992085 containerd[1729]: time="2025-03-25T01:37:53.992033792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:37:53.993380 containerd[1729]: time="2025-03-25T01:37:53.992936797Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.279513656s" Mar 25 01:37:53.993380 containerd[1729]: time="2025-03-25T01:37:53.992974198Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Mar 25 01:37:56.966877 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 25 01:37:56.967108 systemd[1]: kubelet.service: Consumed 152ms CPU time, 94.3M memory peak. Mar 25 01:37:56.970112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:37:56.998379 systemd[1]: Reload requested from client PID 2872 ('systemctl') (unit session-9.scope)... Mar 25 01:37:56.998396 systemd[1]: Reloading... Mar 25 01:37:57.159314 zram_generator::config[2924]: No configuration found. Mar 25 01:37:57.270483 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:37:57.386162 systemd[1]: Reloading finished in 387 ms. Mar 25 01:37:57.440854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:37:57.446026 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:37:57.447767 systemd[1]: kubelet.service: Deactivated successfully. Mar 25 01:37:57.447996 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:37:57.448046 systemd[1]: kubelet.service: Consumed 121ms CPU time, 83.6M memory peak. Mar 25 01:37:57.449660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:37:57.698100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:37:57.706621 (kubelet)[2990]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 01:37:57.745568 kubelet[2990]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:37:57.745904 kubelet[2990]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Mar 25 01:37:57.745904 kubelet[2990]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:37:57.746027 kubelet[2990]: I0325 01:37:57.745988 2990 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 01:37:58.075455 kubelet[2990]: I0325 01:37:58.075346 2990 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 25 01:37:58.075455 kubelet[2990]: I0325 01:37:58.075374 2990 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 01:37:58.076059 kubelet[2990]: I0325 01:37:58.076009 2990 server.go:927] "Client rotation is on, will bootstrap in background" Mar 25 01:37:58.385432 kubelet[2990]: I0325 01:37:58.384754 2990 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:37:58.386535 kubelet[2990]: E0325 01:37:58.386451 2990 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:37:58.395166 kubelet[2990]: I0325 01:37:58.395128 2990 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 25 01:37:58.395445 kubelet[2990]: I0325 01:37:58.395403 2990 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 01:37:58.395628 kubelet[2990]: I0325 01:37:58.395439 2990 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-a-b8cd1bf009","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 25 01:37:58.424044 kubelet[2990]: I0325 01:37:58.423994 2990 topology_manager.go:138] "Creating topology manager with none policy" Mar 
25 01:37:58.424044 kubelet[2990]: I0325 01:37:58.424050 2990 container_manager_linux.go:301] "Creating device plugin manager" Mar 25 01:37:58.424256 kubelet[2990]: I0325 01:37:58.424240 2990 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:37:58.425265 kubelet[2990]: I0325 01:37:58.425239 2990 kubelet.go:400] "Attempting to sync node with API server" Mar 25 01:37:58.425265 kubelet[2990]: I0325 01:37:58.425266 2990 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 25 01:37:58.425630 kubelet[2990]: I0325 01:37:58.425326 2990 kubelet.go:312] "Adding apiserver pod source" Mar 25 01:37:58.425630 kubelet[2990]: I0325 01:37:58.425348 2990 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 25 01:37:58.430895 kubelet[2990]: I0325 01:37:58.430870 2990 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 25 01:37:58.433915 kubelet[2990]: I0325 01:37:58.432593 2990 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 25 01:37:58.433915 kubelet[2990]: W0325 01:37:58.432670 2990 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 25 01:37:58.433915 kubelet[2990]: I0325 01:37:58.433317 2990 server.go:1264] "Started kubelet" Mar 25 01:37:58.433915 kubelet[2990]: W0325 01:37:58.433499 2990 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:37:58.433915 kubelet[2990]: E0325 01:37:58.433570 2990 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:37:58.435874 kubelet[2990]: W0325 01:37:58.435761 2990 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-a-b8cd1bf009&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:37:58.435874 kubelet[2990]: E0325 01:37:58.435827 2990 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-a-b8cd1bf009&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:37:58.436906 kubelet[2990]: I0325 01:37:58.435925 2990 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 25 01:37:58.438102 kubelet[2990]: I0325 01:37:58.437637 2990 server.go:455] "Adding debug handlers to kubelet server" Mar 25 01:37:58.439693 kubelet[2990]: I0325 01:37:58.439107 2990 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 25 01:37:58.439693 kubelet[2990]: I0325 01:37:58.439428 2990 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 25 
01:37:58.439693 kubelet[2990]: E0325 01:37:58.439578 2990 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-a-b8cd1bf009.182fe8028ae1451e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-a-b8cd1bf009,UID:ci-4284.0.0-a-b8cd1bf009,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-a-b8cd1bf009,},FirstTimestamp:2025-03-25 01:37:58.433269022 +0000 UTC m=+0.723340362,LastTimestamp:2025-03-25 01:37:58.433269022 +0000 UTC m=+0.723340362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-a-b8cd1bf009,}" Mar 25 01:37:58.440427 kubelet[2990]: I0325 01:37:58.440409 2990 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 25 01:37:58.446732 kubelet[2990]: E0325 01:37:58.446710 2990 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 25 01:37:58.446989 kubelet[2990]: E0325 01:37:58.446924 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:37:58.447296 kubelet[2990]: I0325 01:37:58.447131 2990 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 25 01:37:58.447296 kubelet[2990]: I0325 01:37:58.447245 2990 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 25 01:37:58.447452 kubelet[2990]: I0325 01:37:58.447440 2990 reconciler.go:26] "Reconciler: start to sync state" Mar 25 01:37:58.448329 kubelet[2990]: W0325 01:37:58.447936 2990 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:37:58.448416 kubelet[2990]: E0325 01:37:58.448350 2990 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:37:58.450106 kubelet[2990]: E0325 01:37:58.448715 2990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-a-b8cd1bf009?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="200ms" Mar 25 01:37:58.453488 kubelet[2990]: I0325 01:37:58.453462 2990 factory.go:221] Registration of the containerd container factory successfully Mar 25 01:37:58.453599 kubelet[2990]: I0325 01:37:58.453590 2990 factory.go:221] Registration of the systemd container factory successfully Mar 25 01:37:58.453739 kubelet[2990]: I0325 01:37:58.453723 2990 
factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 01:37:58.480123 kubelet[2990]: I0325 01:37:58.480092 2990 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 25 01:37:58.480123 kubelet[2990]: I0325 01:37:58.480112 2990 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 25 01:37:58.480404 kubelet[2990]: I0325 01:37:58.480132 2990 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:37:58.486522 kubelet[2990]: I0325 01:37:58.486490 2990 policy_none.go:49] "None policy: Start" Mar 25 01:37:58.487617 kubelet[2990]: I0325 01:37:58.487298 2990 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 25 01:37:58.487617 kubelet[2990]: I0325 01:37:58.487321 2990 state_mem.go:35] "Initializing new in-memory state store" Mar 25 01:37:58.490621 kubelet[2990]: I0325 01:37:58.490445 2990 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 25 01:37:58.492137 kubelet[2990]: I0325 01:37:58.492117 2990 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 25 01:37:58.492231 kubelet[2990]: I0325 01:37:58.492222 2990 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 25 01:37:58.492583 kubelet[2990]: I0325 01:37:58.492332 2990 kubelet.go:2337] "Starting kubelet main sync loop" Mar 25 01:37:58.492583 kubelet[2990]: E0325 01:37:58.492382 2990 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 01:37:58.496754 kubelet[2990]: W0325 01:37:58.496650 2990 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:37:58.496876 kubelet[2990]: E0325 01:37:58.496863 2990 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:37:58.502881 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 25 01:37:58.511183 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 25 01:37:58.514317 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 25 01:37:58.525011 kubelet[2990]: I0325 01:37:58.524984 2990 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 01:37:58.525011 kubelet[2990]: I0325 01:37:58.525208 2990 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 01:37:58.525011 kubelet[2990]: I0325 01:37:58.525343 2990 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 01:37:58.527949 kubelet[2990]: E0325 01:37:58.527926 2990 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:37:58.550495 kubelet[2990]: I0325 01:37:58.550463 2990 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:58.550843 kubelet[2990]: E0325 01:37:58.550807 2990 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:58.593213 kubelet[2990]: I0325 01:37:58.593095 2990 topology_manager.go:215] "Topology Admit Handler" podUID="ababc6902ecac1eabd2e9f848bf37bc3" podNamespace="kube-system" podName="kube-scheduler-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:58.595031 kubelet[2990]: I0325 01:37:58.594996 2990 topology_manager.go:215] "Topology Admit Handler" podUID="d767f58992726f4c2a8221c29df19f92" podNamespace="kube-system" podName="kube-apiserver-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:58.602125 systemd[1]: Created slice kubepods-burstable-podababc6902ecac1eabd2e9f848bf37bc3.slice - libcontainer container kubepods-burstable-podababc6902ecac1eabd2e9f848bf37bc3.slice. 
Mar 25 01:37:58.627086 kubelet[2990]: I0325 01:37:58.627021 2990 topology_manager.go:215] "Topology Admit Handler" podUID="458f70811f5391276f3eb90a627d048e" podNamespace="kube-system" podName="kube-controller-manager-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:58.636306 systemd[1]: Created slice kubepods-burstable-podd767f58992726f4c2a8221c29df19f92.slice - libcontainer container kubepods-burstable-podd767f58992726f4c2a8221c29df19f92.slice. Mar 25 01:37:58.643090 systemd[1]: Created slice kubepods-burstable-pod458f70811f5391276f3eb90a627d048e.slice - libcontainer container kubepods-burstable-pod458f70811f5391276f3eb90a627d048e.slice. Mar 25 01:37:58.648240 kubelet[2990]: I0325 01:37:58.648211 2990 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/458f70811f5391276f3eb90a627d048e-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-a-b8cd1bf009\" (UID: \"458f70811f5391276f3eb90a627d048e\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:58.648240 kubelet[2990]: I0325 01:37:58.648249 2990 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/458f70811f5391276f3eb90a627d048e-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-a-b8cd1bf009\" (UID: \"458f70811f5391276f3eb90a627d048e\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:58.648540 kubelet[2990]: I0325 01:37:58.648288 2990 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ababc6902ecac1eabd2e9f848bf37bc3-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-a-b8cd1bf009\" (UID: \"ababc6902ecac1eabd2e9f848bf37bc3\") " pod="kube-system/kube-scheduler-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:58.648540 kubelet[2990]: I0325 01:37:58.648323 2990 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d767f58992726f4c2a8221c29df19f92-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-a-b8cd1bf009\" (UID: \"d767f58992726f4c2a8221c29df19f92\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:58.648540 kubelet[2990]: I0325 01:37:58.648345 2990 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d767f58992726f4c2a8221c29df19f92-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-a-b8cd1bf009\" (UID: \"d767f58992726f4c2a8221c29df19f92\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:58.648540 kubelet[2990]: I0325 01:37:58.648368 2990 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d767f58992726f4c2a8221c29df19f92-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-a-b8cd1bf009\" (UID: \"d767f58992726f4c2a8221c29df19f92\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:58.648540 kubelet[2990]: I0325 01:37:58.648390 2990 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/458f70811f5391276f3eb90a627d048e-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-a-b8cd1bf009\" (UID: \"458f70811f5391276f3eb90a627d048e\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:58.648668 kubelet[2990]: I0325 01:37:58.648422 2990 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/458f70811f5391276f3eb90a627d048e-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-a-b8cd1bf009\" (UID: \"458f70811f5391276f3eb90a627d048e\") " 
pod="kube-system/kube-controller-manager-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:58.648668 kubelet[2990]: I0325 01:37:58.648458 2990 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/458f70811f5391276f3eb90a627d048e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-a-b8cd1bf009\" (UID: \"458f70811f5391276f3eb90a627d048e\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:58.649376 kubelet[2990]: E0325 01:37:58.649343 2990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-a-b8cd1bf009?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="400ms" Mar 25 01:37:58.753540 kubelet[2990]: I0325 01:37:58.753508 2990 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:58.754080 kubelet[2990]: E0325 01:37:58.753885 2990 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:58.909413 containerd[1729]: time="2025-03-25T01:37:58.909256914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-a-b8cd1bf009,Uid:ababc6902ecac1eabd2e9f848bf37bc3,Namespace:kube-system,Attempt:0,}" Mar 25 01:37:58.941448 containerd[1729]: time="2025-03-25T01:37:58.941394970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-a-b8cd1bf009,Uid:d767f58992726f4c2a8221c29df19f92,Namespace:kube-system,Attempt:0,}" Mar 25 01:37:58.947006 containerd[1729]: time="2025-03-25T01:37:58.946946514Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-a-b8cd1bf009,Uid:458f70811f5391276f3eb90a627d048e,Namespace:kube-system,Attempt:0,}" Mar 25 01:37:59.049963 kubelet[2990]: E0325 01:37:59.049903 2990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-a-b8cd1bf009?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="800ms" Mar 25 01:37:59.156635 kubelet[2990]: I0325 01:37:59.156598 2990 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:59.157041 kubelet[2990]: E0325 01:37:59.157001 2990 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:59.677446 kubelet[2990]: W0325 01:37:59.677369 2990 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-a-b8cd1bf009&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:37:59.677446 kubelet[2990]: E0325 01:37:59.677445 2990 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-a-b8cd1bf009&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:37:59.688806 kubelet[2990]: W0325 01:37:59.688748 2990 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:37:59.688806 kubelet[2990]: E0325 01:37:59.688812 2990 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:37:59.792907 kubelet[2990]: W0325 01:37:59.792859 2990 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:37:59.792907 kubelet[2990]: E0325 01:37:59.792913 2990 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:37:59.850696 kubelet[2990]: E0325 01:37:59.850636 2990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-a-b8cd1bf009?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="1.6s" Mar 25 01:37:59.932241 kubelet[2990]: W0325 01:37:59.931852 2990 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:37:59.932241 kubelet[2990]: E0325 01:37:59.931943 2990 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:37:59.959202 kubelet[2990]: I0325 01:37:59.959176 2990 kubelet_node_status.go:73] "Attempting to register node" 
node="ci-4284.0.0-a-b8cd1bf009" Mar 25 01:37:59.959577 kubelet[2990]: E0325 01:37:59.959533 2990 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:00.561844 kubelet[2990]: E0325 01:38:00.561805 2990 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:38:01.122892 kubelet[2990]: E0325 01:38:01.122759 2990 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-a-b8cd1bf009.182fe8028ae1451e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-a-b8cd1bf009,UID:ci-4284.0.0-a-b8cd1bf009,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-a-b8cd1bf009,},FirstTimestamp:2025-03-25 01:37:58.433269022 +0000 UTC m=+0.723340362,LastTimestamp:2025-03-25 01:37:58.433269022 +0000 UTC m=+0.723340362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-a-b8cd1bf009,}" Mar 25 01:38:01.451772 kubelet[2990]: E0325 01:38:01.451712 2990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-a-b8cd1bf009?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="3.2s" Mar 
25 01:38:01.562227 kubelet[2990]: I0325 01:38:01.562185 2990 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:01.562636 kubelet[2990]: E0325 01:38:01.562596 2990 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:01.911539 kubelet[2990]: W0325 01:38:01.911402 2990 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:38:01.911539 kubelet[2990]: E0325 01:38:01.911455 2990 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:38:01.920807 kubelet[2990]: W0325 01:38:01.920772 2990 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:38:01.920807 kubelet[2990]: E0325 01:38:01.920812 2990 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:38:02.150542 kubelet[2990]: W0325 01:38:02.150495 2990 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:38:02.150542 
kubelet[2990]: E0325 01:38:02.150551 2990 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:38:02.493612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3006165035.mount: Deactivated successfully. Mar 25 01:38:02.516218 containerd[1729]: time="2025-03-25T01:38:02.516167745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:38:02.529739 containerd[1729]: time="2025-03-25T01:38:02.529585052Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Mar 25 01:38:02.534074 containerd[1729]: time="2025-03-25T01:38:02.534041487Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:38:02.537356 containerd[1729]: time="2025-03-25T01:38:02.537324613Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:38:02.544181 containerd[1729]: time="2025-03-25T01:38:02.543910266Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 25 01:38:02.547376 containerd[1729]: time="2025-03-25T01:38:02.547333793Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:38:02.551706 containerd[1729]: time="2025-03-25T01:38:02.551670528Z" 
level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:38:02.552367 containerd[1729]: time="2025-03-25T01:38:02.552301533Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 607.759341ms" Mar 25 01:38:02.553523 containerd[1729]: time="2025-03-25T01:38:02.553438842Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 25 01:38:02.557022 containerd[1729]: time="2025-03-25T01:38:02.556988570Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 616.207709ms" Mar 25 01:38:02.574464 containerd[1729]: time="2025-03-25T01:38:02.574429709Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 640.111299ms" Mar 25 01:38:02.596990 containerd[1729]: time="2025-03-25T01:38:02.596794487Z" level=info msg="connecting to shim cfeb04c1cf66638781032b0bd0f28faa2ad57996fe532a52251f0940e59d8aab" address="unix:///run/containerd/s/f060e4c08321b2e785b618ab952fde29cf3bb4f297efc9f5b3cae71c3a6ae04a" namespace=k8s.io protocol=ttrpc version=3 
Mar 25 01:38:02.623448 systemd[1]: Started cri-containerd-cfeb04c1cf66638781032b0bd0f28faa2ad57996fe532a52251f0940e59d8aab.scope - libcontainer container cfeb04c1cf66638781032b0bd0f28faa2ad57996fe532a52251f0940e59d8aab. Mar 25 01:38:02.648127 containerd[1729]: time="2025-03-25T01:38:02.646492806Z" level=info msg="connecting to shim 079b094c5f9a974396037bb422e0d744a520a8dfb7cb258de79360203ba5658a" address="unix:///run/containerd/s/fd83e1a34bb09e25467dea3febc0f440fe3b9dc33abb38f23e46326f4e1b5034" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:38:02.660855 containerd[1729]: time="2025-03-25T01:38:02.660801470Z" level=info msg="connecting to shim aaf11261dbb63bb0b1b96ac30b9d7e5ed33839f19c405cd121d01ce39d99e12f" address="unix:///run/containerd/s/4996ac3c4e678deb3e2794fb8dd094466b06ca370b2cb7896f0e9a99df87edc2" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:38:02.688501 systemd[1]: Started cri-containerd-079b094c5f9a974396037bb422e0d744a520a8dfb7cb258de79360203ba5658a.scope - libcontainer container 079b094c5f9a974396037bb422e0d744a520a8dfb7cb258de79360203ba5658a. Mar 25 01:38:02.710944 systemd[1]: Started cri-containerd-aaf11261dbb63bb0b1b96ac30b9d7e5ed33839f19c405cd121d01ce39d99e12f.scope - libcontainer container aaf11261dbb63bb0b1b96ac30b9d7e5ed33839f19c405cd121d01ce39d99e12f. 
Mar 25 01:38:02.727163 containerd[1729]: time="2025-03-25T01:38:02.726641026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-a-b8cd1bf009,Uid:458f70811f5391276f3eb90a627d048e,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfeb04c1cf66638781032b0bd0f28faa2ad57996fe532a52251f0940e59d8aab\"" Mar 25 01:38:02.734837 containerd[1729]: time="2025-03-25T01:38:02.734764920Z" level=info msg="CreateContainer within sandbox \"cfeb04c1cf66638781032b0bd0f28faa2ad57996fe532a52251f0940e59d8aab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 25 01:38:02.759020 containerd[1729]: time="2025-03-25T01:38:02.758769895Z" level=info msg="Container 222303e2279c8a050c5fc29961aff8c7372c9bc75cb8e3712641548f43216a8f: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:38:02.770112 containerd[1729]: time="2025-03-25T01:38:02.770072825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-a-b8cd1bf009,Uid:d767f58992726f4c2a8221c29df19f92,Namespace:kube-system,Attempt:0,} returns sandbox id \"079b094c5f9a974396037bb422e0d744a520a8dfb7cb258de79360203ba5658a\"" Mar 25 01:38:02.778030 containerd[1729]: time="2025-03-25T01:38:02.777117306Z" level=info msg="CreateContainer within sandbox \"079b094c5f9a974396037bb422e0d744a520a8dfb7cb258de79360203ba5658a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 25 01:38:02.784694 containerd[1729]: time="2025-03-25T01:38:02.784664393Z" level=info msg="CreateContainer within sandbox \"cfeb04c1cf66638781032b0bd0f28faa2ad57996fe532a52251f0940e59d8aab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"222303e2279c8a050c5fc29961aff8c7372c9bc75cb8e3712641548f43216a8f\"" Mar 25 01:38:02.785447 containerd[1729]: time="2025-03-25T01:38:02.785421701Z" level=info msg="StartContainer for \"222303e2279c8a050c5fc29961aff8c7372c9bc75cb8e3712641548f43216a8f\"" Mar 25 01:38:02.786529 
containerd[1729]: time="2025-03-25T01:38:02.786505814Z" level=info msg="connecting to shim 222303e2279c8a050c5fc29961aff8c7372c9bc75cb8e3712641548f43216a8f" address="unix:///run/containerd/s/f060e4c08321b2e785b618ab952fde29cf3bb4f297efc9f5b3cae71c3a6ae04a" protocol=ttrpc version=3 Mar 25 01:38:02.792512 containerd[1729]: time="2025-03-25T01:38:02.792476882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-a-b8cd1bf009,Uid:ababc6902ecac1eabd2e9f848bf37bc3,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaf11261dbb63bb0b1b96ac30b9d7e5ed33839f19c405cd121d01ce39d99e12f\"" Mar 25 01:38:02.795965 containerd[1729]: time="2025-03-25T01:38:02.795926122Z" level=info msg="CreateContainer within sandbox \"aaf11261dbb63bb0b1b96ac30b9d7e5ed33839f19c405cd121d01ce39d99e12f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 25 01:38:02.804159 kubelet[2990]: W0325 01:38:02.804113 2990 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-a-b8cd1bf009&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:38:02.804159 kubelet[2990]: E0325 01:38:02.804161 2990 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-a-b8cd1bf009&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Mar 25 01:38:02.807447 systemd[1]: Started cri-containerd-222303e2279c8a050c5fc29961aff8c7372c9bc75cb8e3712641548f43216a8f.scope - libcontainer container 222303e2279c8a050c5fc29961aff8c7372c9bc75cb8e3712641548f43216a8f. 
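The reflector failures above are plain TCP "connection refused" errors against the apiserver endpoint (10.200.8.12:6443) while the kube-apiserver container is still being created. A rough sketch of the equivalent low-level retry loop; the attempt counts and timeouts here are illustrative, not the client-go values:

```python
import socket
import time

def wait_for_endpoint(host, port, attempts=3, timeout=0.5, delay=0.1):
    """Retry a TCP connect; True once it succeeds, False if every attempt fails."""
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            time.sleep(delay)
    return False
```

Against the node in this log, `wait_for_endpoint("10.200.8.12", 6443)` would keep returning False until the apiserver container started listening, matching the window in which the reflector errors appear.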
Mar 25 01:38:02.813847 containerd[1729]: time="2025-03-25T01:38:02.812855816Z" level=info msg="Container 20e61a4e6e4cf528eec0e6025531afa8be6c1252c839a249a31b999ad05cd216: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:38:02.828472 containerd[1729]: time="2025-03-25T01:38:02.828413195Z" level=info msg="CreateContainer within sandbox \"079b094c5f9a974396037bb422e0d744a520a8dfb7cb258de79360203ba5658a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"20e61a4e6e4cf528eec0e6025531afa8be6c1252c839a249a31b999ad05cd216\"" Mar 25 01:38:02.829443 containerd[1729]: time="2025-03-25T01:38:02.829419706Z" level=info msg="StartContainer for \"20e61a4e6e4cf528eec0e6025531afa8be6c1252c839a249a31b999ad05cd216\"" Mar 25 01:38:02.830775 containerd[1729]: time="2025-03-25T01:38:02.830746022Z" level=info msg="connecting to shim 20e61a4e6e4cf528eec0e6025531afa8be6c1252c839a249a31b999ad05cd216" address="unix:///run/containerd/s/fd83e1a34bb09e25467dea3febc0f440fe3b9dc33abb38f23e46326f4e1b5034" protocol=ttrpc version=3 Mar 25 01:38:02.831050 containerd[1729]: time="2025-03-25T01:38:02.830858023Z" level=info msg="Container 824c6adba9fc70a7a08990417409af1a6afc2b4b7fee1cdc5b81646152366773: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:38:02.852576 systemd[1]: Started cri-containerd-20e61a4e6e4cf528eec0e6025531afa8be6c1252c839a249a31b999ad05cd216.scope - libcontainer container 20e61a4e6e4cf528eec0e6025531afa8be6c1252c839a249a31b999ad05cd216. 
Mar 25 01:38:02.862472 containerd[1729]: time="2025-03-25T01:38:02.862430485Z" level=info msg="CreateContainer within sandbox \"aaf11261dbb63bb0b1b96ac30b9d7e5ed33839f19c405cd121d01ce39d99e12f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"824c6adba9fc70a7a08990417409af1a6afc2b4b7fee1cdc5b81646152366773\"" Mar 25 01:38:02.864710 containerd[1729]: time="2025-03-25T01:38:02.864513609Z" level=info msg="StartContainer for \"824c6adba9fc70a7a08990417409af1a6afc2b4b7fee1cdc5b81646152366773\"" Mar 25 01:38:02.866293 containerd[1729]: time="2025-03-25T01:38:02.866242029Z" level=info msg="connecting to shim 824c6adba9fc70a7a08990417409af1a6afc2b4b7fee1cdc5b81646152366773" address="unix:///run/containerd/s/4996ac3c4e678deb3e2794fb8dd094466b06ca370b2cb7896f0e9a99df87edc2" protocol=ttrpc version=3 Mar 25 01:38:02.882329 containerd[1729]: time="2025-03-25T01:38:02.882182812Z" level=info msg="StartContainer for \"222303e2279c8a050c5fc29961aff8c7372c9bc75cb8e3712641548f43216a8f\" returns successfully" Mar 25 01:38:02.903427 systemd[1]: Started cri-containerd-824c6adba9fc70a7a08990417409af1a6afc2b4b7fee1cdc5b81646152366773.scope - libcontainer container 824c6adba9fc70a7a08990417409af1a6afc2b4b7fee1cdc5b81646152366773. 
Mar 25 01:38:02.953595 containerd[1729]: time="2025-03-25T01:38:02.953538731Z" level=info msg="StartContainer for \"20e61a4e6e4cf528eec0e6025531afa8be6c1252c839a249a31b999ad05cd216\" returns successfully" Mar 25 01:38:03.091321 containerd[1729]: time="2025-03-25T01:38:03.090437003Z" level=info msg="StartContainer for \"824c6adba9fc70a7a08990417409af1a6afc2b4b7fee1cdc5b81646152366773\" returns successfully" Mar 25 01:38:04.766117 kubelet[2990]: I0325 01:38:04.766080 2990 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:05.263753 kubelet[2990]: E0325 01:38:05.263685 2990 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284.0.0-a-b8cd1bf009\" not found" node="ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:05.387302 kubelet[2990]: I0325 01:38:05.386765 2990 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:05.415198 kubelet[2990]: E0325 01:38:05.415163 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:05.516399 kubelet[2990]: E0325 01:38:05.516039 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:05.616673 kubelet[2990]: E0325 01:38:05.616638 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:05.717247 kubelet[2990]: E0325 01:38:05.717195 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:05.818046 kubelet[2990]: E0325 01:38:05.817897 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:05.918851 kubelet[2990]: E0325 01:38:05.918753 2990 kubelet_node_status.go:462] "Error getting 
the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:06.019775 kubelet[2990]: E0325 01:38:06.019717 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:06.120474 kubelet[2990]: E0325 01:38:06.120341 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:06.221032 kubelet[2990]: E0325 01:38:06.220964 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:06.321568 kubelet[2990]: E0325 01:38:06.321522 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:06.422573 kubelet[2990]: E0325 01:38:06.422450 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:06.523814 kubelet[2990]: E0325 01:38:06.523470 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:06.624378 kubelet[2990]: E0325 01:38:06.624319 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:06.725018 kubelet[2990]: E0325 01:38:06.724975 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:06.825623 kubelet[2990]: E0325 01:38:06.825571 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:06.925750 kubelet[2990]: E0325 01:38:06.925676 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:07.026977 kubelet[2990]: 
E0325 01:38:07.026773 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:07.127587 kubelet[2990]: E0325 01:38:07.127498 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:07.228382 kubelet[2990]: E0325 01:38:07.228326 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:07.307246 systemd[1]: Reload requested from client PID 3259 ('systemctl') (unit session-9.scope)... Mar 25 01:38:07.307266 systemd[1]: Reloading... Mar 25 01:38:07.328977 kubelet[2990]: E0325 01:38:07.328933 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:07.415368 zram_generator::config[3307]: No configuration found. Mar 25 01:38:07.429582 kubelet[2990]: E0325 01:38:07.429545 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:07.530602 kubelet[2990]: E0325 01:38:07.530554 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:07.548847 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:38:07.631960 kubelet[2990]: E0325 01:38:07.631565 2990 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-b8cd1bf009\" not found" Mar 25 01:38:07.679753 systemd[1]: Reloading finished in 371 ms. 
Mar 25 01:38:07.708809 kubelet[2990]: I0325 01:38:07.708735 2990 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:38:07.709314 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:38:07.725585 systemd[1]: kubelet.service: Deactivated successfully. Mar 25 01:38:07.725862 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:38:07.725929 systemd[1]: kubelet.service: Consumed 762ms CPU time, 115.5M memory peak. Mar 25 01:38:07.727922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:38:07.847976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:38:07.857638 (kubelet)[3373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 01:38:07.898159 kubelet[3373]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:38:07.898159 kubelet[3373]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 25 01:38:07.898159 kubelet[3373]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
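systemd's unit-teardown accounting above ("Consumed 762ms CPU time, 115.5M memory peak") is easy to scrape for simple resource reporting. A sketch, with the sample text copied from the log; the unit handling is deliberately minimal (only ms/s and M are recognized):

```python
import re

# Accounting line copied from the kubelet.service stop record above.
SAMPLE = "kubelet.service: Consumed 762ms CPU time, 115.5M memory peak."

def parse_consumed(line):
    """Return CPU seconds and memory peak (MiB) from a systemd 'Consumed' line."""
    m = re.search(r"Consumed (\d+(?:\.\d+)?)(ms|s) CPU time, "
                  r"(\d+(?:\.\d+)?)M memory peak", line)
    cpu = float(m.group(1)) / (1000 if m.group(2) == "ms" else 1)
    return {"cpu_seconds": cpu, "mem_peak_mib": float(m.group(3))}

print(parse_consumed(SAMPLE))
```

The same pattern matches the later session-scope record in this log ("Consumed 4.814s CPU time, 283.6M memory peak").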
Mar 25 01:38:07.898159 kubelet[3373]: I0325 01:38:07.897807 3373 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 01:38:07.902216 kubelet[3373]: I0325 01:38:07.902185 3373 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 25 01:38:07.902216 kubelet[3373]: I0325 01:38:07.902205 3373 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 01:38:07.902451 kubelet[3373]: I0325 01:38:07.902422 3373 server.go:927] "Client rotation is on, will bootstrap in background" Mar 25 01:38:07.903612 kubelet[3373]: I0325 01:38:07.903574 3373 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 25 01:38:07.906174 kubelet[3373]: I0325 01:38:07.906149 3373 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:38:07.913317 kubelet[3373]: I0325 01:38:07.913295 3373 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 25 01:38:07.913580 kubelet[3373]: I0325 01:38:07.913537 3373 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 01:38:07.913755 kubelet[3373]: I0325 01:38:07.913576 3373 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-a-b8cd1bf009","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 25 01:38:07.913896 kubelet[3373]: I0325 01:38:07.913771 3373 topology_manager.go:138] "Creating topology manager with none policy" Mar 
25 01:38:07.913896 kubelet[3373]: I0325 01:38:07.913788 3373 container_manager_linux.go:301] "Creating device plugin manager" Mar 25 01:38:07.913896 kubelet[3373]: I0325 01:38:07.913843 3373 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:38:07.914011 kubelet[3373]: I0325 01:38:07.913957 3373 kubelet.go:400] "Attempting to sync node with API server" Mar 25 01:38:07.914011 kubelet[3373]: I0325 01:38:07.913971 3373 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 25 01:38:07.915074 kubelet[3373]: I0325 01:38:07.914124 3373 kubelet.go:312] "Adding apiserver pod source" Mar 25 01:38:07.915074 kubelet[3373]: I0325 01:38:07.914148 3373 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 25 01:38:07.921300 kubelet[3373]: I0325 01:38:07.916577 3373 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 25 01:38:07.921300 kubelet[3373]: I0325 01:38:07.916760 3373 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 25 01:38:07.921300 kubelet[3373]: I0325 01:38:07.917206 3373 server.go:1264] "Started kubelet" Mar 25 01:38:07.924396 kubelet[3373]: I0325 01:38:07.922915 3373 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 25 01:38:07.929353 kubelet[3373]: I0325 01:38:07.929325 3373 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 25 01:38:07.930611 kubelet[3373]: I0325 01:38:07.930592 3373 server.go:455] "Adding debug handlers to kubelet server" Mar 25 01:38:07.931743 kubelet[3373]: I0325 01:38:07.931700 3373 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 25 01:38:07.932027 kubelet[3373]: I0325 01:38:07.932009 3373 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 25 01:38:07.933968 kubelet[3373]: I0325 01:38:07.933941 3373 
volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 25 01:38:07.934451 kubelet[3373]: I0325 01:38:07.934436 3373 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 25 01:38:07.934692 kubelet[3373]: I0325 01:38:07.934679 3373 reconciler.go:26] "Reconciler: start to sync state" Mar 25 01:38:07.939123 kubelet[3373]: I0325 01:38:07.939081 3373 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 01:38:07.947255 kubelet[3373]: I0325 01:38:07.947228 3373 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 25 01:38:07.949971 kubelet[3373]: E0325 01:38:07.949950 3373 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 25 01:38:07.951366 kubelet[3373]: I0325 01:38:07.951161 3373 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 25 01:38:07.951447 kubelet[3373]: I0325 01:38:07.951376 3373 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 25 01:38:07.951496 kubelet[3373]: I0325 01:38:07.951448 3373 kubelet.go:2337] "Starting kubelet main sync loop" Mar 25 01:38:07.951537 kubelet[3373]: E0325 01:38:07.951522 3373 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 01:38:07.953590 kubelet[3373]: I0325 01:38:07.953504 3373 factory.go:221] Registration of the containerd container factory successfully Mar 25 01:38:07.953590 kubelet[3373]: I0325 01:38:07.953531 3373 factory.go:221] Registration of the systemd container factory successfully Mar 25 01:38:07.996578 kubelet[3373]: I0325 01:38:07.996546 3373 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 25 01:38:07.996578 kubelet[3373]: I0325 01:38:07.996562 3373 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 25 01:38:07.996578 kubelet[3373]: I0325 01:38:07.996583 3373 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:38:07.996829 kubelet[3373]: I0325 01:38:07.996748 3373 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 25 01:38:07.996829 kubelet[3373]: I0325 01:38:07.996761 3373 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 25 01:38:07.996829 kubelet[3373]: I0325 01:38:07.996789 3373 policy_none.go:49] "None policy: Start" Mar 25 01:38:07.997425 kubelet[3373]: I0325 01:38:07.997406 3373 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 25 01:38:07.997542 kubelet[3373]: I0325 01:38:07.997435 3373 state_mem.go:35] "Initializing new in-memory state store" Mar 25 01:38:07.997600 kubelet[3373]: I0325 01:38:07.997589 3373 state_mem.go:75] "Updated machine memory state" Mar 25 01:38:08.001660 kubelet[3373]: I0325 01:38:08.001621 3373 manager.go:479] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 01:38:08.001843 kubelet[3373]: I0325 01:38:08.001798 3373 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 01:38:08.001987 kubelet[3373]: I0325 01:38:08.001912 3373 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 01:38:08.037384 kubelet[3373]: I0325 01:38:08.037356 3373 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:08.046488 kubelet[3373]: I0325 01:38:08.046462 3373 kubelet_node_status.go:112] "Node was previously registered" node="ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:08.046645 kubelet[3373]: I0325 01:38:08.046547 3373 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:08.052265 kubelet[3373]: I0325 01:38:08.052214 3373 topology_manager.go:215] "Topology Admit Handler" podUID="ababc6902ecac1eabd2e9f848bf37bc3" podNamespace="kube-system" podName="kube-scheduler-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:08.052419 kubelet[3373]: I0325 01:38:08.052395 3373 topology_manager.go:215] "Topology Admit Handler" podUID="d767f58992726f4c2a8221c29df19f92" podNamespace="kube-system" podName="kube-apiserver-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:08.052512 kubelet[3373]: I0325 01:38:08.052491 3373 topology_manager.go:215] "Topology Admit Handler" podUID="458f70811f5391276f3eb90a627d048e" podNamespace="kube-system" podName="kube-controller-manager-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:08.427706 kubelet[3373]: I0325 01:38:08.427678 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/458f70811f5391276f3eb90a627d048e-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-a-b8cd1bf009\" (UID: \"458f70811f5391276f3eb90a627d048e\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:08.430082 
kubelet[3373]: I0325 01:38:08.430057 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/458f70811f5391276f3eb90a627d048e-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-a-b8cd1bf009\" (UID: \"458f70811f5391276f3eb90a627d048e\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:08.432486 kubelet[3373]: I0325 01:38:08.430341 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/458f70811f5391276f3eb90a627d048e-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-a-b8cd1bf009\" (UID: \"458f70811f5391276f3eb90a627d048e\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:08.432486 kubelet[3373]: I0325 01:38:08.430372 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ababc6902ecac1eabd2e9f848bf37bc3-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-a-b8cd1bf009\" (UID: \"ababc6902ecac1eabd2e9f848bf37bc3\") " pod="kube-system/kube-scheduler-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:08.432486 kubelet[3373]: I0325 01:38:08.430398 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d767f58992726f4c2a8221c29df19f92-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-a-b8cd1bf009\" (UID: \"d767f58992726f4c2a8221c29df19f92\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:08.432486 kubelet[3373]: I0325 01:38:08.430429 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/458f70811f5391276f3eb90a627d048e-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-a-b8cd1bf009\" (UID: 
\"458f70811f5391276f3eb90a627d048e\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:08.432486 kubelet[3373]: I0325 01:38:08.430454 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/458f70811f5391276f3eb90a627d048e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-a-b8cd1bf009\" (UID: \"458f70811f5391276f3eb90a627d048e\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:08.432745 kubelet[3373]: I0325 01:38:08.430476 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d767f58992726f4c2a8221c29df19f92-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-a-b8cd1bf009\" (UID: \"d767f58992726f4c2a8221c29df19f92\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:08.432745 kubelet[3373]: I0325 01:38:08.430497 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d767f58992726f4c2a8221c29df19f92-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-a-b8cd1bf009\" (UID: \"d767f58992726f4c2a8221c29df19f92\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:08.442307 kubelet[3373]: W0325 01:38:08.440400 3373 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:38:08.442307 kubelet[3373]: W0325 01:38:08.440143 3373 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:38:08.442307 kubelet[3373]: W0325 01:38:08.440966 3373 warnings.go:70] metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:38:08.521426 sudo[3406]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 25 01:38:08.521818 sudo[3406]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 25 01:38:08.916110 kubelet[3373]: I0325 01:38:08.915513 3373 apiserver.go:52] "Watching apiserver" Mar 25 01:38:08.934710 kubelet[3373]: I0325 01:38:08.934650 3373 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 25 01:38:08.988229 kubelet[3373]: W0325 01:38:08.988189 3373 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:38:08.988407 kubelet[3373]: E0325 01:38:08.988272 3373 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284.0.0-a-b8cd1bf009\" already exists" pod="kube-system/kube-apiserver-ci-4284.0.0-a-b8cd1bf009" Mar 25 01:38:09.021001 kubelet[3373]: I0325 01:38:09.020352 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284.0.0-a-b8cd1bf009" podStartSLOduration=1.020330184 podStartE2EDuration="1.020330184s" podCreationTimestamp="2025-03-25 01:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:38:09.019629976 +0000 UTC m=+1.158377500" watchObservedRunningTime="2025-03-25 01:38:09.020330184 +0000 UTC m=+1.159077708" Mar 25 01:38:09.021001 kubelet[3373]: I0325 01:38:09.020474 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284.0.0-a-b8cd1bf009" podStartSLOduration=1.020466486 podStartE2EDuration="1.020466486s" podCreationTimestamp="2025-03-25 01:38:08 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:38:09.009274858 +0000 UTC m=+1.148022482" watchObservedRunningTime="2025-03-25 01:38:09.020466486 +0000 UTC m=+1.159214110" Mar 25 01:38:09.046556 sudo[3406]: pam_unix(sudo:session): session closed for user root Mar 25 01:38:09.051525 kubelet[3373]: I0325 01:38:09.051468 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284.0.0-a-b8cd1bf009" podStartSLOduration=1.051451742 podStartE2EDuration="1.051451742s" podCreationTimestamp="2025-03-25 01:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:38:09.03822179 +0000 UTC m=+1.176969314" watchObservedRunningTime="2025-03-25 01:38:09.051451742 +0000 UTC m=+1.190199366" Mar 25 01:38:10.632551 sudo[2199]: pam_unix(sudo:session): session closed for user root Mar 25 01:38:10.734564 sshd[2198]: Connection closed by 10.200.16.10 port 44578 Mar 25 01:38:10.735792 sshd-session[2196]: pam_unix(sshd:session): session closed for user core Mar 25 01:38:10.739669 systemd[1]: sshd@6-10.200.8.12:22-10.200.16.10:44578.service: Deactivated successfully. Mar 25 01:38:10.742117 systemd[1]: session-9.scope: Deactivated successfully. Mar 25 01:38:10.742438 systemd[1]: session-9.scope: Consumed 4.814s CPU time, 283.6M memory peak. Mar 25 01:38:10.744883 systemd-logind[1704]: Session 9 logged out. Waiting for processes to exit. Mar 25 01:38:10.746098 systemd-logind[1704]: Removed session 9. Mar 25 01:38:23.678708 kubelet[3373]: I0325 01:38:23.678668 3373 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 25 01:38:23.679202 containerd[1729]: time="2025-03-25T01:38:23.679046197Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
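The pod_startup_latency_tracker records above pair each wall-clock timestamp with a monotonic offset since kubelet start (the Go-style "m=+1.158…" suffix), which is what makes the sub-millisecond SLO durations comparable across clock adjustments. A sketch of pulling that offset out, using a field copied verbatim from the log:

```python
import re

# Timestamp field copied from a pod_startup_latency_tracker record above.
FIELD = 'observedRunningTime="2025-03-25 01:38:09.019629976 +0000 UTC m=+1.158377500"'

def monotonic_offset(field):
    """Seconds since process start, taken from the Go time string's 'm=+…' suffix."""
    return float(re.search(r"m=\+([0-9.]+)", field).group(1))

print(monotonic_offset(FIELD))
```

Subtracting two such offsets gives an interval immune to NTP steps, unlike the wall-clock parts of the same fields.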
Mar 25 01:38:23.679561 kubelet[3373]: I0325 01:38:23.679428 3373 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 25 01:38:24.369543 kubelet[3373]: I0325 01:38:24.369439 3373 topology_manager.go:215] "Topology Admit Handler" podUID="aa861ec1-4807-4db6-9833-25d6ad744495" podNamespace="kube-system" podName="kube-proxy-5f52g" Mar 25 01:38:24.378398 kubelet[3373]: I0325 01:38:24.377884 3373 topology_manager.go:215] "Topology Admit Handler" podUID="abf9fc32-9588-4541-b62e-58efc1534cca" podNamespace="kube-system" podName="cilium-lqs7z" Mar 25 01:38:24.384559 systemd[1]: Created slice kubepods-besteffort-podaa861ec1_4807_4db6_9833_25d6ad744495.slice - libcontainer container kubepods-besteffort-podaa861ec1_4807_4db6_9833_25d6ad744495.slice. Mar 25 01:38:24.403097 systemd[1]: Created slice kubepods-burstable-podabf9fc32_9588_4541_b62e_58efc1534cca.slice - libcontainer container kubepods-burstable-podabf9fc32_9588_4541_b62e_58efc1534cca.slice. Mar 25 01:38:24.539551 kubelet[3373]: I0325 01:38:24.539503 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/abf9fc32-9588-4541-b62e-58efc1534cca-hubble-tls\") pod \"cilium-lqs7z\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") " pod="kube-system/cilium-lqs7z" Mar 25 01:38:24.539963 kubelet[3373]: I0325 01:38:24.539563 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2cpd\" (UniqueName: \"kubernetes.io/projected/aa861ec1-4807-4db6-9833-25d6ad744495-kube-api-access-g2cpd\") pod \"kube-proxy-5f52g\" (UID: \"aa861ec1-4807-4db6-9833-25d6ad744495\") " pod="kube-system/kube-proxy-5f52g" Mar 25 01:38:24.539963 kubelet[3373]: I0325 01:38:24.539598 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/abf9fc32-9588-4541-b62e-58efc1534cca-cilium-config-path\") pod \"cilium-lqs7z\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") " pod="kube-system/cilium-lqs7z" Mar 25 01:38:24.539963 kubelet[3373]: I0325 01:38:24.539631 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-xtables-lock\") pod \"cilium-lqs7z\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") " pod="kube-system/cilium-lqs7z" Mar 25 01:38:24.539963 kubelet[3373]: I0325 01:38:24.539659 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-etc-cni-netd\") pod \"cilium-lqs7z\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") " pod="kube-system/cilium-lqs7z" Mar 25 01:38:24.539963 kubelet[3373]: I0325 01:38:24.539683 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa861ec1-4807-4db6-9833-25d6ad744495-kube-proxy\") pod \"kube-proxy-5f52g\" (UID: \"aa861ec1-4807-4db6-9833-25d6ad744495\") " pod="kube-system/kube-proxy-5f52g" Mar 25 01:38:24.540259 kubelet[3373]: I0325 01:38:24.539784 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa861ec1-4807-4db6-9833-25d6ad744495-lib-modules\") pod \"kube-proxy-5f52g\" (UID: \"aa861ec1-4807-4db6-9833-25d6ad744495\") " pod="kube-system/kube-proxy-5f52g" Mar 25 01:38:24.540259 kubelet[3373]: I0325 01:38:24.539818 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/abf9fc32-9588-4541-b62e-58efc1534cca-clustermesh-secrets\") pod \"cilium-lqs7z\" (UID: 
\"abf9fc32-9588-4541-b62e-58efc1534cca\") " pod="kube-system/cilium-lqs7z" Mar 25 01:38:24.540259 kubelet[3373]: I0325 01:38:24.539847 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-host-proc-sys-kernel\") pod \"cilium-lqs7z\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") " pod="kube-system/cilium-lqs7z" Mar 25 01:38:24.540259 kubelet[3373]: I0325 01:38:24.539873 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c7vh\" (UniqueName: \"kubernetes.io/projected/abf9fc32-9588-4541-b62e-58efc1534cca-kube-api-access-8c7vh\") pod \"cilium-lqs7z\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") " pod="kube-system/cilium-lqs7z" Mar 25 01:38:24.540259 kubelet[3373]: I0325 01:38:24.539900 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-cilium-cgroup\") pod \"cilium-lqs7z\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") " pod="kube-system/cilium-lqs7z" Mar 25 01:38:24.540466 kubelet[3373]: I0325 01:38:24.539943 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-cni-path\") pod \"cilium-lqs7z\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") " pod="kube-system/cilium-lqs7z" Mar 25 01:38:24.540466 kubelet[3373]: I0325 01:38:24.539967 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-lib-modules\") pod \"cilium-lqs7z\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") " pod="kube-system/cilium-lqs7z" Mar 25 01:38:24.540466 kubelet[3373]: I0325 
01:38:24.539997 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-host-proc-sys-net\") pod \"cilium-lqs7z\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") " pod="kube-system/cilium-lqs7z" Mar 25 01:38:24.540466 kubelet[3373]: I0325 01:38:24.540025 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa861ec1-4807-4db6-9833-25d6ad744495-xtables-lock\") pod \"kube-proxy-5f52g\" (UID: \"aa861ec1-4807-4db6-9833-25d6ad744495\") " pod="kube-system/kube-proxy-5f52g" Mar 25 01:38:24.540466 kubelet[3373]: I0325 01:38:24.540057 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-cilium-run\") pod \"cilium-lqs7z\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") " pod="kube-system/cilium-lqs7z" Mar 25 01:38:24.540466 kubelet[3373]: I0325 01:38:24.540104 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-bpf-maps\") pod \"cilium-lqs7z\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") " pod="kube-system/cilium-lqs7z" Mar 25 01:38:24.540669 kubelet[3373]: I0325 01:38:24.540130 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-hostproc\") pod \"cilium-lqs7z\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") " pod="kube-system/cilium-lqs7z" Mar 25 01:38:24.696259 containerd[1729]: time="2025-03-25T01:38:24.696199783Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-5f52g,Uid:aa861ec1-4807-4db6-9833-25d6ad744495,Namespace:kube-system,Attempt:0,}" Mar 25 01:38:24.710273 containerd[1729]: time="2025-03-25T01:38:24.710240211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lqs7z,Uid:abf9fc32-9588-4541-b62e-58efc1534cca,Namespace:kube-system,Attempt:0,}" Mar 25 01:38:24.765470 kubelet[3373]: I0325 01:38:24.765402 3373 topology_manager.go:215] "Topology Admit Handler" podUID="583567bf-5944-4dbc-9bec-d4c11784752d" podNamespace="kube-system" podName="cilium-operator-599987898-jjvnj" Mar 25 01:38:24.779618 systemd[1]: Created slice kubepods-besteffort-pod583567bf_5944_4dbc_9bec_d4c11784752d.slice - libcontainer container kubepods-besteffort-pod583567bf_5944_4dbc_9bec_d4c11784752d.slice. Mar 25 01:38:24.814907 containerd[1729]: time="2025-03-25T01:38:24.810413925Z" level=info msg="connecting to shim 0aea6f81a3c388bfad9674ed329ebc342fb1ab2f5eb5b1abc4399a03b696c31e" address="unix:///run/containerd/s/3752fa0b2f4be9f0759f68523cda3000843ba4822a26936383bc40e5cdccfd00" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:38:24.830397 containerd[1729]: time="2025-03-25T01:38:24.830342307Z" level=info msg="connecting to shim 4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1" address="unix:///run/containerd/s/01b175368aa56ff89549f048acb243905398f11ef8fbed47c927829179c8e6de" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:38:24.845156 kubelet[3373]: I0325 01:38:24.845013 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/583567bf-5944-4dbc-9bec-d4c11784752d-cilium-config-path\") pod \"cilium-operator-599987898-jjvnj\" (UID: \"583567bf-5944-4dbc-9bec-d4c11784752d\") " pod="kube-system/cilium-operator-599987898-jjvnj" Mar 25 01:38:24.845156 kubelet[3373]: I0325 01:38:24.845112 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-wx9zq\" (UniqueName: \"kubernetes.io/projected/583567bf-5944-4dbc-9bec-d4c11784752d-kube-api-access-wx9zq\") pod \"cilium-operator-599987898-jjvnj\" (UID: \"583567bf-5944-4dbc-9bec-d4c11784752d\") " pod="kube-system/cilium-operator-599987898-jjvnj" Mar 25 01:38:24.867186 systemd[1]: Started cri-containerd-0aea6f81a3c388bfad9674ed329ebc342fb1ab2f5eb5b1abc4399a03b696c31e.scope - libcontainer container 0aea6f81a3c388bfad9674ed329ebc342fb1ab2f5eb5b1abc4399a03b696c31e. Mar 25 01:38:24.874709 systemd[1]: Started cri-containerd-4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1.scope - libcontainer container 4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1. Mar 25 01:38:24.907471 containerd[1729]: time="2025-03-25T01:38:24.907419011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lqs7z,Uid:abf9fc32-9588-4541-b62e-58efc1534cca,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\"" Mar 25 01:38:24.911348 containerd[1729]: time="2025-03-25T01:38:24.909749932Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 25 01:38:24.916878 containerd[1729]: time="2025-03-25T01:38:24.916850597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5f52g,Uid:aa861ec1-4807-4db6-9833-25d6ad744495,Namespace:kube-system,Attempt:0,} returns sandbox id \"0aea6f81a3c388bfad9674ed329ebc342fb1ab2f5eb5b1abc4399a03b696c31e\"" Mar 25 01:38:24.923894 containerd[1729]: time="2025-03-25T01:38:24.923851661Z" level=info msg="CreateContainer within sandbox \"0aea6f81a3c388bfad9674ed329ebc342fb1ab2f5eb5b1abc4399a03b696c31e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 25 01:38:24.946111 containerd[1729]: time="2025-03-25T01:38:24.946078964Z" level=info msg="Container 50420a819d69c9f18b46aed130c91edd355d4c234a9906ebe79aee8594519c76: CDI devices 
from CRI Config.CDIDevices: []" Mar 25 01:38:24.966208 containerd[1729]: time="2025-03-25T01:38:24.966105147Z" level=info msg="CreateContainer within sandbox \"0aea6f81a3c388bfad9674ed329ebc342fb1ab2f5eb5b1abc4399a03b696c31e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"50420a819d69c9f18b46aed130c91edd355d4c234a9906ebe79aee8594519c76\"" Mar 25 01:38:24.967162 containerd[1729]: time="2025-03-25T01:38:24.967129056Z" level=info msg="StartContainer for \"50420a819d69c9f18b46aed130c91edd355d4c234a9906ebe79aee8594519c76\"" Mar 25 01:38:24.968637 containerd[1729]: time="2025-03-25T01:38:24.968603070Z" level=info msg="connecting to shim 50420a819d69c9f18b46aed130c91edd355d4c234a9906ebe79aee8594519c76" address="unix:///run/containerd/s/3752fa0b2f4be9f0759f68523cda3000843ba4822a26936383bc40e5cdccfd00" protocol=ttrpc version=3 Mar 25 01:38:24.986461 systemd[1]: Started cri-containerd-50420a819d69c9f18b46aed130c91edd355d4c234a9906ebe79aee8594519c76.scope - libcontainer container 50420a819d69c9f18b46aed130c91edd355d4c234a9906ebe79aee8594519c76. 
Mar 25 01:38:25.028582 containerd[1729]: time="2025-03-25T01:38:25.028541017Z" level=info msg="StartContainer for \"50420a819d69c9f18b46aed130c91edd355d4c234a9906ebe79aee8594519c76\" returns successfully" Mar 25 01:38:25.084895 containerd[1729]: time="2025-03-25T01:38:25.084844131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-jjvnj,Uid:583567bf-5944-4dbc-9bec-d4c11784752d,Namespace:kube-system,Attempt:0,}" Mar 25 01:38:25.128933 containerd[1729]: time="2025-03-25T01:38:25.128884533Z" level=info msg="connecting to shim d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55" address="unix:///run/containerd/s/d5843a699c61fd17d429aeca9524a16e097da5dd79a5e213bca7f31b621ab567" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:38:25.152442 systemd[1]: Started cri-containerd-d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55.scope - libcontainer container d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55. Mar 25 01:38:25.204908 containerd[1729]: time="2025-03-25T01:38:25.204850526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-jjvnj,Uid:583567bf-5944-4dbc-9bec-d4c11784752d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55\"" Mar 25 01:38:27.966790 kubelet[3373]: I0325 01:38:27.966647 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5f52g" podStartSLOduration=3.966626448 podStartE2EDuration="3.966626448s" podCreationTimestamp="2025-03-25 01:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:38:26.027696838 +0000 UTC m=+18.166444462" watchObservedRunningTime="2025-03-25 01:38:27.966626448 +0000 UTC m=+20.105374072" Mar 25 01:38:31.127392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2755810642.mount: Deactivated 
successfully. Mar 25 01:38:33.301294 containerd[1729]: time="2025-03-25T01:38:33.301243984Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:38:33.302986 containerd[1729]: time="2025-03-25T01:38:33.302922699Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 25 01:38:33.306214 containerd[1729]: time="2025-03-25T01:38:33.306160228Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:38:33.307541 containerd[1729]: time="2025-03-25T01:38:33.307398840Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.397607907s" Mar 25 01:38:33.307541 containerd[1729]: time="2025-03-25T01:38:33.307438940Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 25 01:38:33.308917 containerd[1729]: time="2025-03-25T01:38:33.308643551Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 25 01:38:33.310168 containerd[1729]: time="2025-03-25T01:38:33.310059164Z" level=info msg="CreateContainer within sandbox \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" for 
container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 25 01:38:33.339303 containerd[1729]: time="2025-03-25T01:38:33.339247931Z" level=info msg="Container 814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:38:33.358017 containerd[1729]: time="2025-03-25T01:38:33.357975002Z" level=info msg="CreateContainer within sandbox \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122\"" Mar 25 01:38:33.358486 containerd[1729]: time="2025-03-25T01:38:33.358457806Z" level=info msg="StartContainer for \"814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122\"" Mar 25 01:38:33.359903 containerd[1729]: time="2025-03-25T01:38:33.359724618Z" level=info msg="connecting to shim 814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122" address="unix:///run/containerd/s/01b175368aa56ff89549f048acb243905398f11ef8fbed47c927829179c8e6de" protocol=ttrpc version=3 Mar 25 01:38:33.381446 systemd[1]: Started cri-containerd-814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122.scope - libcontainer container 814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122. Mar 25 01:38:33.410841 containerd[1729]: time="2025-03-25T01:38:33.409852476Z" level=info msg="StartContainer for \"814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122\" returns successfully" Mar 25 01:38:33.419371 systemd[1]: cri-containerd-814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122.scope: Deactivated successfully. 
Mar 25 01:38:33.421680 containerd[1729]: time="2025-03-25T01:38:33.421650784Z" level=info msg="received exit event container_id:\"814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122\" id:\"814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122\" pid:3778 exited_at:{seconds:1742866713 nanos:421248880}" Mar 25 01:38:33.421892 containerd[1729]: time="2025-03-25T01:38:33.421740984Z" level=info msg="TaskExit event in podsandbox handler container_id:\"814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122\" id:\"814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122\" pid:3778 exited_at:{seconds:1742866713 nanos:421248880}" Mar 25 01:38:33.442175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122-rootfs.mount: Deactivated successfully. Mar 25 01:38:37.930005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2286779847.mount: Deactivated successfully. Mar 25 01:38:38.050362 containerd[1729]: time="2025-03-25T01:38:38.049951623Z" level=info msg="CreateContainer within sandbox \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 25 01:38:38.075792 containerd[1729]: time="2025-03-25T01:38:38.074176351Z" level=info msg="Container b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:38:38.088247 containerd[1729]: time="2025-03-25T01:38:38.088097183Z" level=info msg="CreateContainer within sandbox \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d\"" Mar 25 01:38:38.090306 containerd[1729]: time="2025-03-25T01:38:38.088913991Z" level=info msg="StartContainer for 
\"b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d\"" Mar 25 01:38:38.090306 containerd[1729]: time="2025-03-25T01:38:38.089833699Z" level=info msg="connecting to shim b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d" address="unix:///run/containerd/s/01b175368aa56ff89549f048acb243905398f11ef8fbed47c927829179c8e6de" protocol=ttrpc version=3 Mar 25 01:38:38.118468 systemd[1]: Started cri-containerd-b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d.scope - libcontainer container b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d. Mar 25 01:38:38.158371 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 25 01:38:38.158842 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:38:38.159253 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 25 01:38:38.161868 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 25 01:38:38.164424 containerd[1729]: time="2025-03-25T01:38:38.162654987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d\" id:\"b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d\" pid:3829 exited_at:{seconds:1742866718 nanos:162107082}" Mar 25 01:38:38.165531 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 25 01:38:38.166196 systemd[1]: cri-containerd-b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d.scope: Deactivated successfully. Mar 25 01:38:38.186565 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 25 01:38:38.189941 containerd[1729]: time="2025-03-25T01:38:38.189783244Z" level=info msg="received exit event container_id:\"b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d\" id:\"b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d\" pid:3829 exited_at:{seconds:1742866718 nanos:162107082}" Mar 25 01:38:38.191523 containerd[1729]: time="2025-03-25T01:38:38.191376359Z" level=info msg="StartContainer for \"b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d\" returns successfully" Mar 25 01:38:38.707022 containerd[1729]: time="2025-03-25T01:38:38.706974429Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:38:38.709006 containerd[1729]: time="2025-03-25T01:38:38.708931348Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 25 01:38:38.714385 containerd[1729]: time="2025-03-25T01:38:38.714355099Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:38:38.715825 containerd[1729]: time="2025-03-25T01:38:38.715690411Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.40698976s" Mar 25 01:38:38.715825 containerd[1729]: time="2025-03-25T01:38:38.715729412Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 25 01:38:38.718289 containerd[1729]: time="2025-03-25T01:38:38.718241735Z" level=info msg="CreateContainer within sandbox \"d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 25 01:38:38.740683 containerd[1729]: time="2025-03-25T01:38:38.740637547Z" level=info msg="Container 85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:38:38.755689 containerd[1729]: time="2025-03-25T01:38:38.755625589Z" level=info msg="CreateContainer within sandbox \"d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd\"" Mar 25 01:38:38.756133 containerd[1729]: time="2025-03-25T01:38:38.756089893Z" level=info msg="StartContainer for \"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd\"" Mar 25 01:38:38.757583 containerd[1729]: time="2025-03-25T01:38:38.757554607Z" level=info msg="connecting to shim 85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd" address="unix:///run/containerd/s/d5843a699c61fd17d429aeca9524a16e097da5dd79a5e213bca7f31b621ab567" protocol=ttrpc version=3 Mar 25 01:38:38.775603 systemd[1]: Started cri-containerd-85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd.scope - libcontainer container 85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd. 
Mar 25 01:38:38.812754 containerd[1729]: time="2025-03-25T01:38:38.812657827Z" level=info msg="StartContainer for \"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd\" returns successfully" Mar 25 01:38:38.923091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d-rootfs.mount: Deactivated successfully. Mar 25 01:38:39.065318 containerd[1729]: time="2025-03-25T01:38:39.065174113Z" level=info msg="CreateContainer within sandbox \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 25 01:38:39.096613 containerd[1729]: time="2025-03-25T01:38:39.096008604Z" level=info msg="Container 95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:38:39.103131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount628075451.mount: Deactivated successfully. Mar 25 01:38:39.132923 containerd[1729]: time="2025-03-25T01:38:39.132612050Z" level=info msg="CreateContainer within sandbox \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e\"" Mar 25 01:38:39.133948 containerd[1729]: time="2025-03-25T01:38:39.133896662Z" level=info msg="StartContainer for \"95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e\"" Mar 25 01:38:39.136453 containerd[1729]: time="2025-03-25T01:38:39.136184183Z" level=info msg="connecting to shim 95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e" address="unix:///run/containerd/s/01b175368aa56ff89549f048acb243905398f11ef8fbed47c927829179c8e6de" protocol=ttrpc version=3 Mar 25 01:38:39.141610 kubelet[3373]: I0325 01:38:39.141207 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/cilium-operator-599987898-jjvnj" podStartSLOduration=1.6319526610000001 podStartE2EDuration="15.141183931s" podCreationTimestamp="2025-03-25 01:38:24 +0000 UTC" firstStartedPulling="2025-03-25 01:38:25.20740115 +0000 UTC m=+17.346148674" lastFinishedPulling="2025-03-25 01:38:38.71663242 +0000 UTC m=+30.855379944" observedRunningTime="2025-03-25 01:38:39.086797317 +0000 UTC m=+31.225544941" watchObservedRunningTime="2025-03-25 01:38:39.141183931 +0000 UTC m=+31.279931555" Mar 25 01:38:39.172751 systemd[1]: Started cri-containerd-95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e.scope - libcontainer container 95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e. Mar 25 01:38:39.296693 systemd[1]: cri-containerd-95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e.scope: Deactivated successfully. Mar 25 01:38:39.305565 containerd[1729]: time="2025-03-25T01:38:39.305524983Z" level=info msg="received exit event container_id:\"95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e\" id:\"95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e\" pid:3918 exited_at:{seconds:1742866719 nanos:303801567}" Mar 25 01:38:39.305838 containerd[1729]: time="2025-03-25T01:38:39.305719885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e\" id:\"95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e\" pid:3918 exited_at:{seconds:1742866719 nanos:303801567}" Mar 25 01:38:39.307082 containerd[1729]: time="2025-03-25T01:38:39.307052598Z" level=info msg="StartContainer for \"95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e\" returns successfully" Mar 25 01:38:39.356413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e-rootfs.mount: Deactivated successfully. 
Mar 25 01:38:40.067144 containerd[1729]: time="2025-03-25T01:38:40.066487971Z" level=info msg="CreateContainer within sandbox \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 25 01:38:40.094821 containerd[1729]: time="2025-03-25T01:38:40.093262624Z" level=info msg="Container c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:38:40.109168 containerd[1729]: time="2025-03-25T01:38:40.109125974Z" level=info msg="CreateContainer within sandbox \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c\"" Mar 25 01:38:40.110688 containerd[1729]: time="2025-03-25T01:38:40.109649279Z" level=info msg="StartContainer for \"c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c\"" Mar 25 01:38:40.110688 containerd[1729]: time="2025-03-25T01:38:40.110534887Z" level=info msg="connecting to shim c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c" address="unix:///run/containerd/s/01b175368aa56ff89549f048acb243905398f11ef8fbed47c927829179c8e6de" protocol=ttrpc version=3 Mar 25 01:38:40.131448 systemd[1]: Started cri-containerd-c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c.scope - libcontainer container c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c. Mar 25 01:38:40.158366 systemd[1]: cri-containerd-c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c.scope: Deactivated successfully. 
Mar 25 01:38:40.160816 containerd[1729]: time="2025-03-25T01:38:40.160771862Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c\" id:\"c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c\" pid:3957 exited_at:{seconds:1742866720 nanos:160523360}" Mar 25 01:38:40.166819 containerd[1729]: time="2025-03-25T01:38:40.166679518Z" level=info msg="received exit event container_id:\"c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c\" id:\"c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c\" pid:3957 exited_at:{seconds:1742866720 nanos:160523360}" Mar 25 01:38:40.173929 containerd[1729]: time="2025-03-25T01:38:40.173787685Z" level=info msg="StartContainer for \"c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c\" returns successfully" Mar 25 01:38:40.187687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c-rootfs.mount: Deactivated successfully. 
Mar 25 01:38:41.072130 containerd[1729]: time="2025-03-25T01:38:41.072056870Z" level=info msg="CreateContainer within sandbox \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 25 01:38:41.096485 containerd[1729]: time="2025-03-25T01:38:41.096438901Z" level=info msg="Container 367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:38:41.110911 containerd[1729]: time="2025-03-25T01:38:41.110867137Z" level=info msg="CreateContainer within sandbox \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\"" Mar 25 01:38:41.112556 containerd[1729]: time="2025-03-25T01:38:41.111446042Z" level=info msg="StartContainer for \"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\"" Mar 25 01:38:41.113691 containerd[1729]: time="2025-03-25T01:38:41.113264359Z" level=info msg="connecting to shim 367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba" address="unix:///run/containerd/s/01b175368aa56ff89549f048acb243905398f11ef8fbed47c927829179c8e6de" protocol=ttrpc version=3 Mar 25 01:38:41.138458 systemd[1]: Started cri-containerd-367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba.scope - libcontainer container 367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba. 
Mar 25 01:38:41.173819 containerd[1729]: time="2025-03-25T01:38:41.173060824Z" level=info msg="StartContainer for \"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\" returns successfully" Mar 25 01:38:41.240570 containerd[1729]: time="2025-03-25T01:38:41.240527562Z" level=info msg="TaskExit event in podsandbox handler container_id:\"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\" id:\"cf3780e6ba09a1e7ef3460ee254a7854bde19ccd376a865271f3dae811c08f0c\" pid:4024 exited_at:{seconds:1742866721 nanos:240114258}" Mar 25 01:38:41.320937 kubelet[3373]: I0325 01:38:41.320906 3373 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 25 01:38:41.355779 kubelet[3373]: I0325 01:38:41.354227 3373 topology_manager.go:215] "Topology Admit Handler" podUID="c3ded3bb-eb0c-4568-8e7d-d35c91baaab1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-28b5p" Mar 25 01:38:41.369463 kubelet[3373]: I0325 01:38:41.369406 3373 topology_manager.go:215] "Topology Admit Handler" podUID="a2a3bf26-fd18-4aa1-8b80-6576c4847187" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4thv6" Mar 25 01:38:41.380459 systemd[1]: Created slice kubepods-burstable-podc3ded3bb_eb0c_4568_8e7d_d35c91baaab1.slice - libcontainer container kubepods-burstable-podc3ded3bb_eb0c_4568_8e7d_d35c91baaab1.slice. Mar 25 01:38:41.396976 systemd[1]: Created slice kubepods-burstable-poda2a3bf26_fd18_4aa1_8b80_6576c4847187.slice - libcontainer container kubepods-burstable-poda2a3bf26_fd18_4aa1_8b80_6576c4847187.slice. 
Mar 25 01:38:41.459313 kubelet[3373]: I0325 01:38:41.459223 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3ded3bb-eb0c-4568-8e7d-d35c91baaab1-config-volume\") pod \"coredns-7db6d8ff4d-28b5p\" (UID: \"c3ded3bb-eb0c-4568-8e7d-d35c91baaab1\") " pod="kube-system/coredns-7db6d8ff4d-28b5p" Mar 25 01:38:41.460024 kubelet[3373]: I0325 01:38:41.459858 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qks4p\" (UniqueName: \"kubernetes.io/projected/c3ded3bb-eb0c-4568-8e7d-d35c91baaab1-kube-api-access-qks4p\") pod \"coredns-7db6d8ff4d-28b5p\" (UID: \"c3ded3bb-eb0c-4568-8e7d-d35c91baaab1\") " pod="kube-system/coredns-7db6d8ff4d-28b5p" Mar 25 01:38:41.562998 kubelet[3373]: I0325 01:38:41.561305 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trmnr\" (UniqueName: \"kubernetes.io/projected/a2a3bf26-fd18-4aa1-8b80-6576c4847187-kube-api-access-trmnr\") pod \"coredns-7db6d8ff4d-4thv6\" (UID: \"a2a3bf26-fd18-4aa1-8b80-6576c4847187\") " pod="kube-system/coredns-7db6d8ff4d-4thv6" Mar 25 01:38:41.562998 kubelet[3373]: I0325 01:38:41.561392 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2a3bf26-fd18-4aa1-8b80-6576c4847187-config-volume\") pod \"coredns-7db6d8ff4d-4thv6\" (UID: \"a2a3bf26-fd18-4aa1-8b80-6576c4847187\") " pod="kube-system/coredns-7db6d8ff4d-4thv6" Mar 25 01:38:41.690404 containerd[1729]: time="2025-03-25T01:38:41.689537503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-28b5p,Uid:c3ded3bb-eb0c-4568-8e7d-d35c91baaab1,Namespace:kube-system,Attempt:0,}" Mar 25 01:38:41.705113 containerd[1729]: time="2025-03-25T01:38:41.705056850Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-4thv6,Uid:a2a3bf26-fd18-4aa1-8b80-6576c4847187,Namespace:kube-system,Attempt:0,}" Mar 25 01:38:43.368790 systemd-networkd[1560]: cilium_host: Link UP Mar 25 01:38:43.368997 systemd-networkd[1560]: cilium_net: Link UP Mar 25 01:38:43.369202 systemd-networkd[1560]: cilium_net: Gained carrier Mar 25 01:38:43.371569 systemd-networkd[1560]: cilium_host: Gained carrier Mar 25 01:38:43.519757 systemd-networkd[1560]: cilium_net: Gained IPv6LL Mar 25 01:38:43.526930 systemd-networkd[1560]: cilium_vxlan: Link UP Mar 25 01:38:43.526940 systemd-networkd[1560]: cilium_vxlan: Gained carrier Mar 25 01:38:43.799371 kernel: NET: Registered PF_ALG protocol family Mar 25 01:38:44.239695 systemd-networkd[1560]: cilium_host: Gained IPv6LL Mar 25 01:38:44.521791 systemd-networkd[1560]: lxc_health: Link UP Mar 25 01:38:44.522146 systemd-networkd[1560]: lxc_health: Gained carrier Mar 25 01:38:44.738402 kernel: eth0: renamed from tmpa741a Mar 25 01:38:44.736687 systemd-networkd[1560]: lxcf268ce8fb474: Link UP Mar 25 01:38:44.740787 kubelet[3373]: I0325 01:38:44.739959 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lqs7z" podStartSLOduration=12.340599317 podStartE2EDuration="20.739647938s" podCreationTimestamp="2025-03-25 01:38:24 +0000 UTC" firstStartedPulling="2025-03-25 01:38:24.909316828 +0000 UTC m=+17.048064352" lastFinishedPulling="2025-03-25 01:38:33.308365349 +0000 UTC m=+25.447112973" observedRunningTime="2025-03-25 01:38:42.09044709 +0000 UTC m=+34.229194714" watchObservedRunningTime="2025-03-25 01:38:44.739647938 +0000 UTC m=+36.878395462" Mar 25 01:38:44.745850 systemd-networkd[1560]: lxcf268ce8fb474: Gained carrier Mar 25 01:38:44.762586 systemd-networkd[1560]: lxc13a82ec38b79: Link UP Mar 25 01:38:44.773915 kernel: eth0: renamed from tmp68c5f Mar 25 01:38:44.786507 systemd-networkd[1560]: lxc13a82ec38b79: Gained carrier Mar 25 01:38:44.943516 systemd-networkd[1560]: cilium_vxlan: 
Gained IPv6LL Mar 25 01:38:45.904573 systemd-networkd[1560]: lxc13a82ec38b79: Gained IPv6LL Mar 25 01:38:46.223453 systemd-networkd[1560]: lxc_health: Gained IPv6LL Mar 25 01:38:46.735483 systemd-networkd[1560]: lxcf268ce8fb474: Gained IPv6LL Mar 25 01:38:48.685863 containerd[1729]: time="2025-03-25T01:38:48.685812999Z" level=info msg="connecting to shim 68c5f0fb8218a0dce02db2e97be470106708d725cb685d691f812f6ad9372ee5" address="unix:///run/containerd/s/a4a9586f53a3ed3e3efad1c623860eeccbc89e45df62a2941d3cc4afe9072438" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:38:48.730006 containerd[1729]: time="2025-03-25T01:38:48.729953608Z" level=info msg="connecting to shim a741a4c93c4c32290430dca23c4eb5367b239265b06499c36bdca8bfefbcdd6e" address="unix:///run/containerd/s/7e4a321c1acb93c6bb4cc36232ee4fae7c4aa061e433ec37c66c55a313f224ee" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:38:48.745466 systemd[1]: Started cri-containerd-68c5f0fb8218a0dce02db2e97be470106708d725cb685d691f812f6ad9372ee5.scope - libcontainer container 68c5f0fb8218a0dce02db2e97be470106708d725cb685d691f812f6ad9372ee5. Mar 25 01:38:48.768860 systemd[1]: Started cri-containerd-a741a4c93c4c32290430dca23c4eb5367b239265b06499c36bdca8bfefbcdd6e.scope - libcontainer container a741a4c93c4c32290430dca23c4eb5367b239265b06499c36bdca8bfefbcdd6e. 
Mar 25 01:38:48.834222 containerd[1729]: time="2025-03-25T01:38:48.834109773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4thv6,Uid:a2a3bf26-fd18-4aa1-8b80-6576c4847187,Namespace:kube-system,Attempt:0,} returns sandbox id \"68c5f0fb8218a0dce02db2e97be470106708d725cb685d691f812f6ad9372ee5\"" Mar 25 01:38:48.839622 containerd[1729]: time="2025-03-25T01:38:48.839539123Z" level=info msg="CreateContainer within sandbox \"68c5f0fb8218a0dce02db2e97be470106708d725cb685d691f812f6ad9372ee5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 01:38:48.873320 containerd[1729]: time="2025-03-25T01:38:48.872041125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-28b5p,Uid:c3ded3bb-eb0c-4568-8e7d-d35c91baaab1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a741a4c93c4c32290430dca23c4eb5367b239265b06499c36bdca8bfefbcdd6e\"" Mar 25 01:38:48.876088 containerd[1729]: time="2025-03-25T01:38:48.876041062Z" level=info msg="CreateContainer within sandbox \"a741a4c93c4c32290430dca23c4eb5367b239265b06499c36bdca8bfefbcdd6e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 01:38:48.885664 containerd[1729]: time="2025-03-25T01:38:48.885638051Z" level=info msg="Container 3a438e5c8fbab9293399328b875145b339cdd7152e3f02b68b4515ff661f481b: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:38:48.902796 containerd[1729]: time="2025-03-25T01:38:48.902692509Z" level=info msg="CreateContainer within sandbox \"68c5f0fb8218a0dce02db2e97be470106708d725cb685d691f812f6ad9372ee5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a438e5c8fbab9293399328b875145b339cdd7152e3f02b68b4515ff661f481b\"" Mar 25 01:38:48.904723 containerd[1729]: time="2025-03-25T01:38:48.904389724Z" level=info msg="StartContainer for \"3a438e5c8fbab9293399328b875145b339cdd7152e3f02b68b4515ff661f481b\"" Mar 25 01:38:48.905348 containerd[1729]: time="2025-03-25T01:38:48.905288133Z" level=info 
msg="connecting to shim 3a438e5c8fbab9293399328b875145b339cdd7152e3f02b68b4515ff661f481b" address="unix:///run/containerd/s/a4a9586f53a3ed3e3efad1c623860eeccbc89e45df62a2941d3cc4afe9072438" protocol=ttrpc version=3 Mar 25 01:38:48.920527 containerd[1729]: time="2025-03-25T01:38:48.920387473Z" level=info msg="Container 7e25d87832c28fb31d7a9b966caf0ab0f252fc36b523a41c0c547fc31316bbcd: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:38:48.924476 systemd[1]: Started cri-containerd-3a438e5c8fbab9293399328b875145b339cdd7152e3f02b68b4515ff661f481b.scope - libcontainer container 3a438e5c8fbab9293399328b875145b339cdd7152e3f02b68b4515ff661f481b. Mar 25 01:38:48.934110 containerd[1729]: time="2025-03-25T01:38:48.933977198Z" level=info msg="CreateContainer within sandbox \"a741a4c93c4c32290430dca23c4eb5367b239265b06499c36bdca8bfefbcdd6e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e25d87832c28fb31d7a9b966caf0ab0f252fc36b523a41c0c547fc31316bbcd\"" Mar 25 01:38:48.935310 containerd[1729]: time="2025-03-25T01:38:48.934667805Z" level=info msg="StartContainer for \"7e25d87832c28fb31d7a9b966caf0ab0f252fc36b523a41c0c547fc31316bbcd\"" Mar 25 01:38:48.935563 containerd[1729]: time="2025-03-25T01:38:48.935528213Z" level=info msg="connecting to shim 7e25d87832c28fb31d7a9b966caf0ab0f252fc36b523a41c0c547fc31316bbcd" address="unix:///run/containerd/s/7e4a321c1acb93c6bb4cc36232ee4fae7c4aa061e433ec37c66c55a313f224ee" protocol=ttrpc version=3 Mar 25 01:38:48.960629 systemd[1]: Started cri-containerd-7e25d87832c28fb31d7a9b966caf0ab0f252fc36b523a41c0c547fc31316bbcd.scope - libcontainer container 7e25d87832c28fb31d7a9b966caf0ab0f252fc36b523a41c0c547fc31316bbcd. 
Mar 25 01:38:48.971921 containerd[1729]: time="2025-03-25T01:38:48.971883450Z" level=info msg="StartContainer for \"3a438e5c8fbab9293399328b875145b339cdd7152e3f02b68b4515ff661f481b\" returns successfully" Mar 25 01:38:49.005359 containerd[1729]: time="2025-03-25T01:38:49.005237459Z" level=info msg="StartContainer for \"7e25d87832c28fb31d7a9b966caf0ab0f252fc36b523a41c0c547fc31316bbcd\" returns successfully" Mar 25 01:38:49.114195 kubelet[3373]: I0325 01:38:49.113389 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-28b5p" podStartSLOduration=25.113368961 podStartE2EDuration="25.113368961s" podCreationTimestamp="2025-03-25 01:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:38:49.108829018 +0000 UTC m=+41.247576642" watchObservedRunningTime="2025-03-25 01:38:49.113368961 +0000 UTC m=+41.252116585" Mar 25 01:38:49.150559 kubelet[3373]: I0325 01:38:49.150404 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4thv6" podStartSLOduration=25.150383403 podStartE2EDuration="25.150383403s" podCreationTimestamp="2025-03-25 01:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:38:49.149023391 +0000 UTC m=+41.287771015" watchObservedRunningTime="2025-03-25 01:38:49.150383403 +0000 UTC m=+41.289130927" Mar 25 01:39:25.358991 waagent[1960]: 2025-03-25T01:39:25.358925Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Mar 25 01:39:25.368624 waagent[1960]: 2025-03-25T01:39:25.368573Z INFO ExtHandler Mar 25 01:39:25.368741 waagent[1960]: 2025-03-25T01:39:25.368684Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e89fbe0b-5247-4b7d-a2fc-93d62b2c949a eTag: 17186836209311880314 source: Fabric] Mar 25 
01:39:25.369073 waagent[1960]: 2025-03-25T01:39:25.369026Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 25 01:39:25.369687 waagent[1960]: 2025-03-25T01:39:25.369636Z INFO ExtHandler Mar 25 01:39:25.369768 waagent[1960]: 2025-03-25T01:39:25.369719Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Mar 25 01:39:25.438362 waagent[1960]: 2025-03-25T01:39:25.438303Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 25 01:39:25.502526 waagent[1960]: 2025-03-25T01:39:25.502453Z INFO ExtHandler Downloaded certificate {'thumbprint': '8D0755C5F996FBF5C0D46CD2F2C9011DF28F12F7', 'hasPrivateKey': False} Mar 25 01:39:25.502920 waagent[1960]: 2025-03-25T01:39:25.502875Z INFO ExtHandler Downloaded certificate {'thumbprint': '8CBA7404DA26697F91B20EF5A07BF2FD0FB827F0', 'hasPrivateKey': True} Mar 25 01:39:25.503386 waagent[1960]: 2025-03-25T01:39:25.503343Z INFO ExtHandler Fetch goal state completed Mar 25 01:39:25.504320 waagent[1960]: 2025-03-25T01:39:25.503718Z INFO ExtHandler ExtHandler Mar 25 01:39:25.504320 waagent[1960]: 2025-03-25T01:39:25.503803Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: be9fb621-3c13-4d48-afdb-62155d76ebb8 correlation 67a44ab6-87ff-4e5d-acbb-45c4cda0a4c9 created: 2025-03-25T01:39:19.878487Z] Mar 25 01:39:25.504320 waagent[1960]: 2025-03-25T01:39:25.504047Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 25 01:39:25.504688 waagent[1960]: 2025-03-25T01:39:25.504648Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Mar 25 01:39:46.351586 systemd[1]: Started sshd@7-10.200.8.12:22-10.200.16.10:38614.service - OpenSSH per-connection server daemon (10.200.16.10:38614). 
Mar 25 01:39:46.988720 sshd[4689]: Accepted publickey for core from 10.200.16.10 port 38614 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:39:46.990815 sshd-session[4689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:39:47.003467 systemd-logind[1704]: New session 10 of user core. Mar 25 01:39:47.007465 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 25 01:39:47.500364 sshd[4691]: Connection closed by 10.200.16.10 port 38614 Mar 25 01:39:47.501258 sshd-session[4689]: pam_unix(sshd:session): session closed for user core Mar 25 01:39:47.505426 systemd[1]: sshd@7-10.200.8.12:22-10.200.16.10:38614.service: Deactivated successfully. Mar 25 01:39:47.507608 systemd[1]: session-10.scope: Deactivated successfully. Mar 25 01:39:47.508561 systemd-logind[1704]: Session 10 logged out. Waiting for processes to exit. Mar 25 01:39:47.509538 systemd-logind[1704]: Removed session 10. Mar 25 01:39:52.627911 systemd[1]: Started sshd@8-10.200.8.12:22-10.200.16.10:57230.service - OpenSSH per-connection server daemon (10.200.16.10:57230). Mar 25 01:39:53.263473 sshd[4703]: Accepted publickey for core from 10.200.16.10 port 57230 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:39:53.264934 sshd-session[4703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:39:53.269270 systemd-logind[1704]: New session 11 of user core. Mar 25 01:39:53.275713 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 25 01:39:53.769437 sshd[4705]: Connection closed by 10.200.16.10 port 57230 Mar 25 01:39:53.770232 sshd-session[4703]: pam_unix(sshd:session): session closed for user core Mar 25 01:39:53.773406 systemd[1]: sshd@8-10.200.8.12:22-10.200.16.10:57230.service: Deactivated successfully. Mar 25 01:39:53.775747 systemd[1]: session-11.scope: Deactivated successfully. Mar 25 01:39:53.777330 systemd-logind[1704]: Session 11 logged out. 
Waiting for processes to exit. Mar 25 01:39:53.778621 systemd-logind[1704]: Removed session 11. Mar 25 01:39:58.884528 systemd[1]: Started sshd@9-10.200.8.12:22-10.200.16.10:57234.service - OpenSSH per-connection server daemon (10.200.16.10:57234). Mar 25 01:39:59.515371 sshd[4720]: Accepted publickey for core from 10.200.16.10 port 57234 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:39:59.517036 sshd-session[4720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:39:59.521362 systemd-logind[1704]: New session 12 of user core. Mar 25 01:39:59.525451 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 25 01:40:00.016777 sshd[4722]: Connection closed by 10.200.16.10 port 57234 Mar 25 01:40:00.017612 sshd-session[4720]: pam_unix(sshd:session): session closed for user core Mar 25 01:40:00.020811 systemd[1]: sshd@9-10.200.8.12:22-10.200.16.10:57234.service: Deactivated successfully. Mar 25 01:40:00.023077 systemd[1]: session-12.scope: Deactivated successfully. Mar 25 01:40:00.024808 systemd-logind[1704]: Session 12 logged out. Waiting for processes to exit. Mar 25 01:40:00.025987 systemd-logind[1704]: Removed session 12. Mar 25 01:40:05.129258 systemd[1]: Started sshd@10-10.200.8.12:22-10.200.16.10:37000.service - OpenSSH per-connection server daemon (10.200.16.10:37000). Mar 25 01:40:05.886923 sshd[4735]: Accepted publickey for core from 10.200.16.10 port 37000 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:40:05.888484 sshd-session[4735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:40:05.892847 systemd-logind[1704]: New session 13 of user core. Mar 25 01:40:05.899455 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 25 01:40:06.398274 sshd[4737]: Connection closed by 10.200.16.10 port 37000 Mar 25 01:40:06.398796 sshd-session[4735]: pam_unix(sshd:session): session closed for user core Mar 25 01:40:06.404042 systemd[1]: sshd@10-10.200.8.12:22-10.200.16.10:37000.service: Deactivated successfully. Mar 25 01:40:06.406663 systemd[1]: session-13.scope: Deactivated successfully. Mar 25 01:40:06.407868 systemd-logind[1704]: Session 13 logged out. Waiting for processes to exit. Mar 25 01:40:06.408980 systemd-logind[1704]: Removed session 13. Mar 25 01:40:06.510248 systemd[1]: Started sshd@11-10.200.8.12:22-10.200.16.10:37010.service - OpenSSH per-connection server daemon (10.200.16.10:37010). Mar 25 01:40:07.144384 sshd[4750]: Accepted publickey for core from 10.200.16.10 port 37010 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:40:07.145810 sshd-session[4750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:40:07.150217 systemd-logind[1704]: New session 14 of user core. Mar 25 01:40:07.156454 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 25 01:40:07.675261 sshd[4752]: Connection closed by 10.200.16.10 port 37010 Mar 25 01:40:07.676093 sshd-session[4750]: pam_unix(sshd:session): session closed for user core Mar 25 01:40:07.679208 systemd[1]: sshd@11-10.200.8.12:22-10.200.16.10:37010.service: Deactivated successfully. Mar 25 01:40:07.681572 systemd[1]: session-14.scope: Deactivated successfully. Mar 25 01:40:07.683258 systemd-logind[1704]: Session 14 logged out. Waiting for processes to exit. Mar 25 01:40:07.684564 systemd-logind[1704]: Removed session 14. Mar 25 01:40:07.789574 systemd[1]: Started sshd@12-10.200.8.12:22-10.200.16.10:37018.service - OpenSSH per-connection server daemon (10.200.16.10:37018). 
Mar 25 01:40:08.442358 sshd[4761]: Accepted publickey for core from 10.200.16.10 port 37018 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:40:08.444018 sshd-session[4761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:40:08.449119 systemd-logind[1704]: New session 15 of user core. Mar 25 01:40:08.453438 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 25 01:40:08.947328 sshd[4765]: Connection closed by 10.200.16.10 port 37018 Mar 25 01:40:08.945692 sshd-session[4761]: pam_unix(sshd:session): session closed for user core Mar 25 01:40:08.950510 systemd[1]: sshd@12-10.200.8.12:22-10.200.16.10:37018.service: Deactivated successfully. Mar 25 01:40:08.952815 systemd[1]: session-15.scope: Deactivated successfully. Mar 25 01:40:08.953636 systemd-logind[1704]: Session 15 logged out. Waiting for processes to exit. Mar 25 01:40:08.954644 systemd-logind[1704]: Removed session 15. Mar 25 01:40:14.059255 systemd[1]: Started sshd@13-10.200.8.12:22-10.200.16.10:41962.service - OpenSSH per-connection server daemon (10.200.16.10:41962). Mar 25 01:40:14.696672 sshd[4777]: Accepted publickey for core from 10.200.16.10 port 41962 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:40:14.698451 sshd-session[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:40:14.704002 systemd-logind[1704]: New session 16 of user core. Mar 25 01:40:14.709445 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 25 01:40:15.197648 sshd[4779]: Connection closed by 10.200.16.10 port 41962 Mar 25 01:40:15.198846 sshd-session[4777]: pam_unix(sshd:session): session closed for user core Mar 25 01:40:15.203034 systemd[1]: sshd@13-10.200.8.12:22-10.200.16.10:41962.service: Deactivated successfully. Mar 25 01:40:15.205271 systemd[1]: session-16.scope: Deactivated successfully. Mar 25 01:40:15.206297 systemd-logind[1704]: Session 16 logged out. 
Waiting for processes to exit. Mar 25 01:40:15.207300 systemd-logind[1704]: Removed session 16. Mar 25 01:40:20.311987 systemd[1]: Started sshd@14-10.200.8.12:22-10.200.16.10:38360.service - OpenSSH per-connection server daemon (10.200.16.10:38360). Mar 25 01:40:20.945123 sshd[4791]: Accepted publickey for core from 10.200.16.10 port 38360 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:40:20.946701 sshd-session[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:40:20.951992 systemd-logind[1704]: New session 17 of user core. Mar 25 01:40:20.957603 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 25 01:40:21.444485 sshd[4793]: Connection closed by 10.200.16.10 port 38360 Mar 25 01:40:21.445306 sshd-session[4791]: pam_unix(sshd:session): session closed for user core Mar 25 01:40:21.450071 systemd[1]: sshd@14-10.200.8.12:22-10.200.16.10:38360.service: Deactivated successfully. Mar 25 01:40:21.452203 systemd[1]: session-17.scope: Deactivated successfully. Mar 25 01:40:21.453115 systemd-logind[1704]: Session 17 logged out. Waiting for processes to exit. Mar 25 01:40:21.454119 systemd-logind[1704]: Removed session 17. Mar 25 01:40:21.563590 systemd[1]: Started sshd@15-10.200.8.12:22-10.200.16.10:38362.service - OpenSSH per-connection server daemon (10.200.16.10:38362). Mar 25 01:40:22.201934 sshd[4805]: Accepted publickey for core from 10.200.16.10 port 38362 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:40:22.203677 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:40:22.208126 systemd-logind[1704]: New session 18 of user core. Mar 25 01:40:22.213436 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 25 01:40:22.766692 sshd[4807]: Connection closed by 10.200.16.10 port 38362 Mar 25 01:40:22.767667 sshd-session[4805]: pam_unix(sshd:session): session closed for user core Mar 25 01:40:22.772155 systemd[1]: sshd@15-10.200.8.12:22-10.200.16.10:38362.service: Deactivated successfully. Mar 25 01:40:22.774570 systemd[1]: session-18.scope: Deactivated successfully. Mar 25 01:40:22.775528 systemd-logind[1704]: Session 18 logged out. Waiting for processes to exit. Mar 25 01:40:22.776528 systemd-logind[1704]: Removed session 18. Mar 25 01:40:22.882537 systemd[1]: Started sshd@16-10.200.8.12:22-10.200.16.10:38368.service - OpenSSH per-connection server daemon (10.200.16.10:38368). Mar 25 01:40:23.513404 sshd[4817]: Accepted publickey for core from 10.200.16.10 port 38368 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:40:23.515033 sshd-session[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:40:23.519645 systemd-logind[1704]: New session 19 of user core. Mar 25 01:40:23.525428 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 25 01:40:25.518336 sshd[4819]: Connection closed by 10.200.16.10 port 38368 Mar 25 01:40:25.519144 sshd-session[4817]: pam_unix(sshd:session): session closed for user core Mar 25 01:40:25.522938 systemd[1]: sshd@16-10.200.8.12:22-10.200.16.10:38368.service: Deactivated successfully. Mar 25 01:40:25.525679 systemd[1]: session-19.scope: Deactivated successfully. Mar 25 01:40:25.527619 systemd-logind[1704]: Session 19 logged out. Waiting for processes to exit. Mar 25 01:40:25.529002 systemd-logind[1704]: Removed session 19. Mar 25 01:40:25.632307 systemd[1]: Started sshd@17-10.200.8.12:22-10.200.16.10:38374.service - OpenSSH per-connection server daemon (10.200.16.10:38374). 
Mar 25 01:40:26.273859 sshd[4838]: Accepted publickey for core from 10.200.16.10 port 38374 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:40:26.275498 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:40:26.280437 systemd-logind[1704]: New session 20 of user core. Mar 25 01:40:26.286452 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 25 01:40:26.877020 sshd[4840]: Connection closed by 10.200.16.10 port 38374 Mar 25 01:40:26.877901 sshd-session[4838]: pam_unix(sshd:session): session closed for user core Mar 25 01:40:26.882621 systemd[1]: sshd@17-10.200.8.12:22-10.200.16.10:38374.service: Deactivated successfully. Mar 25 01:40:26.885133 systemd[1]: session-20.scope: Deactivated successfully. Mar 25 01:40:26.885984 systemd-logind[1704]: Session 20 logged out. Waiting for processes to exit. Mar 25 01:40:26.887157 systemd-logind[1704]: Removed session 20. Mar 25 01:40:26.990248 systemd[1]: Started sshd@18-10.200.8.12:22-10.200.16.10:38388.service - OpenSSH per-connection server daemon (10.200.16.10:38388). Mar 25 01:40:27.625024 sshd[4849]: Accepted publickey for core from 10.200.16.10 port 38388 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:40:27.626512 sshd-session[4849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:40:27.630968 systemd-logind[1704]: New session 21 of user core. Mar 25 01:40:27.639724 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 25 01:40:28.125378 sshd[4851]: Connection closed by 10.200.16.10 port 38388 Mar 25 01:40:28.126223 sshd-session[4849]: pam_unix(sshd:session): session closed for user core Mar 25 01:40:28.130007 systemd[1]: sshd@18-10.200.8.12:22-10.200.16.10:38388.service: Deactivated successfully. Mar 25 01:40:28.132180 systemd[1]: session-21.scope: Deactivated successfully. Mar 25 01:40:28.133160 systemd-logind[1704]: Session 21 logged out. 
Waiting for processes to exit. Mar 25 01:40:28.134110 systemd-logind[1704]: Removed session 21. Mar 25 01:40:33.244512 systemd[1]: Started sshd@19-10.200.8.12:22-10.200.16.10:54876.service - OpenSSH per-connection server daemon (10.200.16.10:54876). Mar 25 01:40:33.995524 sshd[4866]: Accepted publickey for core from 10.200.16.10 port 54876 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:40:33.997697 sshd-session[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:40:34.003571 systemd-logind[1704]: New session 22 of user core. Mar 25 01:40:34.010697 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 25 01:40:34.585411 sshd[4868]: Connection closed by 10.200.16.10 port 54876 Mar 25 01:40:34.586591 sshd-session[4866]: pam_unix(sshd:session): session closed for user core Mar 25 01:40:34.591156 systemd[1]: sshd@19-10.200.8.12:22-10.200.16.10:54876.service: Deactivated successfully. Mar 25 01:40:34.593791 systemd[1]: session-22.scope: Deactivated successfully. Mar 25 01:40:34.594782 systemd-logind[1704]: Session 22 logged out. Waiting for processes to exit. Mar 25 01:40:34.595887 systemd-logind[1704]: Removed session 22. Mar 25 01:40:39.706587 systemd[1]: Started sshd@20-10.200.8.12:22-10.200.16.10:57022.service - OpenSSH per-connection server daemon (10.200.16.10:57022). Mar 25 01:40:40.396858 sshd[4880]: Accepted publickey for core from 10.200.16.10 port 57022 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:40:40.398449 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:40:40.402904 systemd-logind[1704]: New session 23 of user core. Mar 25 01:40:40.409443 systemd[1]: Started session-23.scope - Session 23 of User core. 
Mar 25 01:40:40.909842 sshd[4882]: Connection closed by 10.200.16.10 port 57022 Mar 25 01:40:40.910956 sshd-session[4880]: pam_unix(sshd:session): session closed for user core Mar 25 01:40:40.914934 systemd[1]: sshd@20-10.200.8.12:22-10.200.16.10:57022.service: Deactivated successfully. Mar 25 01:40:40.917153 systemd[1]: session-23.scope: Deactivated successfully. Mar 25 01:40:40.917957 systemd-logind[1704]: Session 23 logged out. Waiting for processes to exit. Mar 25 01:40:40.919013 systemd-logind[1704]: Removed session 23. Mar 25 01:40:46.024241 systemd[1]: Started sshd@21-10.200.8.12:22-10.200.16.10:57030.service - OpenSSH per-connection server daemon (10.200.16.10:57030). Mar 25 01:40:46.656179 sshd[4893]: Accepted publickey for core from 10.200.16.10 port 57030 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:40:46.657728 sshd-session[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:40:46.662004 systemd-logind[1704]: New session 24 of user core. Mar 25 01:40:46.670427 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 25 01:40:47.155575 sshd[4895]: Connection closed by 10.200.16.10 port 57030 Mar 25 01:40:47.156479 sshd-session[4893]: pam_unix(sshd:session): session closed for user core Mar 25 01:40:47.160672 systemd[1]: sshd@21-10.200.8.12:22-10.200.16.10:57030.service: Deactivated successfully. Mar 25 01:40:47.162820 systemd[1]: session-24.scope: Deactivated successfully. Mar 25 01:40:47.164503 systemd-logind[1704]: Session 24 logged out. Waiting for processes to exit. Mar 25 01:40:47.166123 systemd-logind[1704]: Removed session 24. Mar 25 01:40:47.267541 systemd[1]: Started sshd@22-10.200.8.12:22-10.200.16.10:57042.service - OpenSSH per-connection server daemon (10.200.16.10:57042). 
Mar 25 01:40:47.897969 sshd[4907]: Accepted publickey for core from 10.200.16.10 port 57042 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA
Mar 25 01:40:47.899773 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:40:47.904373 systemd-logind[1704]: New session 25 of user core.
Mar 25 01:40:47.909414 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 25 01:40:49.832640 containerd[1729]: time="2025-03-25T01:40:49.832581234Z" level=info msg="StopContainer for \"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd\" with timeout 30 (s)"
Mar 25 01:40:49.834642 containerd[1729]: time="2025-03-25T01:40:49.833743744Z" level=info msg="Stop container \"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd\" with signal terminated"
Mar 25 01:40:49.855448 systemd[1]: cri-containerd-85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd.scope: Deactivated successfully.
Mar 25 01:40:49.863780 containerd[1729]: time="2025-03-25T01:40:49.863062592Z" level=info msg="received exit event container_id:\"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd\" id:\"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd\" pid:3884 exited_at:{seconds:1742866849 nanos:862197485}"
Mar 25 01:40:49.863780 containerd[1729]: time="2025-03-25T01:40:49.863676598Z" level=info msg="TaskExit event in podsandbox handler container_id:\"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd\" id:\"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd\" pid:3884 exited_at:{seconds:1742866849 nanos:862197485}"
Mar 25 01:40:49.872820 containerd[1729]: time="2025-03-25T01:40:49.872484272Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 25 01:40:49.880476 containerd[1729]: time="2025-03-25T01:40:49.880412040Z" level=info msg="TaskExit event in podsandbox handler container_id:\"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\" id:\"2e54b45fb8f971c47c8ab1681953af9de4e53bc7132b4c7b46c9bfcfff074aba\" pid:4935 exited_at:{seconds:1742866849 nanos:878668225}"
Mar 25 01:40:49.884483 containerd[1729]: time="2025-03-25T01:40:49.884414274Z" level=info msg="StopContainer for \"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\" with timeout 2 (s)"
Mar 25 01:40:49.885116 containerd[1729]: time="2025-03-25T01:40:49.885079579Z" level=info msg="Stop container \"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\" with signal terminated"
Mar 25 01:40:49.891026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd-rootfs.mount: Deactivated successfully.
Mar 25 01:40:49.897988 systemd-networkd[1560]: lxc_health: Link DOWN
Mar 25 01:40:49.897997 systemd-networkd[1560]: lxc_health: Lost carrier
Mar 25 01:40:49.912638 systemd[1]: cri-containerd-367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba.scope: Deactivated successfully.
Mar 25 01:40:49.914095 systemd[1]: cri-containerd-367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba.scope: Consumed 7.430s CPU time, 126.3M memory peak, 136K read from disk, 13.3M written to disk.
Mar 25 01:40:49.923376 containerd[1729]: time="2025-03-25T01:40:49.914552929Z" level=info msg="received exit event container_id:\"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\" id:\"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\" pid:3994 exited_at:{seconds:1742866849 nanos:914062925}"
Mar 25 01:40:49.923376 containerd[1729]: time="2025-03-25T01:40:49.914988433Z" level=info msg="TaskExit event in podsandbox handler container_id:\"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\" id:\"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\" pid:3994 exited_at:{seconds:1742866849 nanos:914062925}"
Mar 25 01:40:49.937021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba-rootfs.mount: Deactivated successfully.
Mar 25 01:40:49.941148 containerd[1729]: time="2025-03-25T01:40:49.941115255Z" level=info msg="StopContainer for \"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd\" returns successfully"
Mar 25 01:40:49.941990 containerd[1729]: time="2025-03-25T01:40:49.941969562Z" level=info msg="StopPodSandbox for \"d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55\""
Mar 25 01:40:49.942235 containerd[1729]: time="2025-03-25T01:40:49.942131264Z" level=info msg="Container to stop \"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:40:49.949692 systemd[1]: cri-containerd-d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55.scope: Deactivated successfully.
Mar 25 01:40:49.953783 containerd[1729]: time="2025-03-25T01:40:49.953556860Z" level=info msg="StopContainer for \"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\" returns successfully"
Mar 25 01:40:49.955157 containerd[1729]: time="2025-03-25T01:40:49.954954372Z" level=info msg="StopPodSandbox for \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\""
Mar 25 01:40:49.955157 containerd[1729]: time="2025-03-25T01:40:49.955018073Z" level=info msg="Container to stop \"c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:40:49.955157 containerd[1729]: time="2025-03-25T01:40:49.955032873Z" level=info msg="Container to stop \"814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:40:49.955157 containerd[1729]: time="2025-03-25T01:40:49.955045673Z" level=info msg="Container to stop \"b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:40:49.955157 containerd[1729]: time="2025-03-25T01:40:49.955058273Z" level=info msg="Container to stop \"95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:40:49.955157 containerd[1729]: time="2025-03-25T01:40:49.955071473Z" level=info msg="Container to stop \"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:40:49.957512 containerd[1729]: time="2025-03-25T01:40:49.957394793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55\" id:\"d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55\" pid:3636 exit_status:137 exited_at:{seconds:1742866849 nanos:956169883}"
Mar 25 01:40:49.967563 systemd[1]: cri-containerd-4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1.scope: Deactivated successfully.
Mar 25 01:40:49.996194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55-rootfs.mount: Deactivated successfully.
Mar 25 01:40:50.007998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1-rootfs.mount: Deactivated successfully.
Mar 25 01:40:50.010666 containerd[1729]: time="2025-03-25T01:40:50.010515944Z" level=info msg="shim disconnected" id=d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55 namespace=k8s.io
Mar 25 01:40:50.010666 containerd[1729]: time="2025-03-25T01:40:50.010551944Z" level=warning msg="cleaning up after shim disconnected" id=d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55 namespace=k8s.io
Mar 25 01:40:50.010666 containerd[1729]: time="2025-03-25T01:40:50.010563144Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 25 01:40:50.012010 containerd[1729]: time="2025-03-25T01:40:50.011783955Z" level=info msg="shim disconnected" id=4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1 namespace=k8s.io
Mar 25 01:40:50.012010 containerd[1729]: time="2025-03-25T01:40:50.011811455Z" level=warning msg="cleaning up after shim disconnected" id=4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1 namespace=k8s.io
Mar 25 01:40:50.012010 containerd[1729]: time="2025-03-25T01:40:50.011823955Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 25 01:40:50.028766 containerd[1729]: time="2025-03-25T01:40:50.028644998Z" level=info msg="received exit event sandbox_id:\"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" exit_status:137 exited_at:{seconds:1742866849 nanos:974593339}"
Mar 25 01:40:50.029304 containerd[1729]: time="2025-03-25T01:40:50.029161902Z" level=info msg="TearDown network for sandbox \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" successfully"
Mar 25 01:40:50.029304 containerd[1729]: time="2025-03-25T01:40:50.029215503Z" level=info msg="StopPodSandbox for \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" returns successfully"
Mar 25 01:40:50.031127 containerd[1729]: time="2025-03-25T01:40:50.030926217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" id:\"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" pid:3520 exit_status:137 exited_at:{seconds:1742866849 nanos:974593339}"
Mar 25 01:40:50.031455 containerd[1729]: time="2025-03-25T01:40:50.031349921Z" level=info msg="TearDown network for sandbox \"d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55\" successfully"
Mar 25 01:40:50.031455 containerd[1729]: time="2025-03-25T01:40:50.031375121Z" level=info msg="StopPodSandbox for \"d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55\" returns successfully"
Mar 25 01:40:50.031716 containerd[1729]: time="2025-03-25T01:40:50.031696024Z" level=info msg="received exit event sandbox_id:\"d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55\" exit_status:137 exited_at:{seconds:1742866849 nanos:956169883}"
Mar 25 01:40:50.034455 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1-shm.mount: Deactivated successfully.
Mar 25 01:40:50.137825 kubelet[3373]: I0325 01:40:50.137666 3373 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-hostproc\") pod \"abf9fc32-9588-4541-b62e-58efc1534cca\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") "
Mar 25 01:40:50.137825 kubelet[3373]: I0325 01:40:50.137719 3373 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-cni-path\") pod \"abf9fc32-9588-4541-b62e-58efc1534cca\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") "
Mar 25 01:40:50.137825 kubelet[3373]: I0325 01:40:50.137743 3373 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-lib-modules\") pod \"abf9fc32-9588-4541-b62e-58efc1534cca\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") "
Mar 25 01:40:50.137825 kubelet[3373]: I0325 01:40:50.137765 3373 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-host-proc-sys-net\") pod \"abf9fc32-9588-4541-b62e-58efc1534cca\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") "
Mar 25 01:40:50.137825 kubelet[3373]: I0325 01:40:50.137791 3373 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-cilium-run\") pod \"abf9fc32-9588-4541-b62e-58efc1534cca\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") "
Mar 25 01:40:50.139936 kubelet[3373]: I0325 01:40:50.137830 3373 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c7vh\" (UniqueName: \"kubernetes.io/projected/abf9fc32-9588-4541-b62e-58efc1534cca-kube-api-access-8c7vh\") pod \"abf9fc32-9588-4541-b62e-58efc1534cca\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") "
Mar 25 01:40:50.139936 kubelet[3373]: I0325 01:40:50.137857 3373 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx9zq\" (UniqueName: \"kubernetes.io/projected/583567bf-5944-4dbc-9bec-d4c11784752d-kube-api-access-wx9zq\") pod \"583567bf-5944-4dbc-9bec-d4c11784752d\" (UID: \"583567bf-5944-4dbc-9bec-d4c11784752d\") "
Mar 25 01:40:50.139936 kubelet[3373]: I0325 01:40:50.137884 3373 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abf9fc32-9588-4541-b62e-58efc1534cca-cilium-config-path\") pod \"abf9fc32-9588-4541-b62e-58efc1534cca\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") "
Mar 25 01:40:50.139936 kubelet[3373]: I0325 01:40:50.137909 3373 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/abf9fc32-9588-4541-b62e-58efc1534cca-clustermesh-secrets\") pod \"abf9fc32-9588-4541-b62e-58efc1534cca\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") "
Mar 25 01:40:50.139936 kubelet[3373]: I0325 01:40:50.137935 3373 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/583567bf-5944-4dbc-9bec-d4c11784752d-cilium-config-path\") pod \"583567bf-5944-4dbc-9bec-d4c11784752d\" (UID: \"583567bf-5944-4dbc-9bec-d4c11784752d\") "
Mar 25 01:40:50.139936 kubelet[3373]: I0325 01:40:50.137962 3373 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-bpf-maps\") pod \"abf9fc32-9588-4541-b62e-58efc1534cca\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") "
Mar 25 01:40:50.140254 kubelet[3373]: I0325 01:40:50.137989 3373 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-xtables-lock\") pod \"abf9fc32-9588-4541-b62e-58efc1534cca\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") "
Mar 25 01:40:50.140254 kubelet[3373]: I0325 01:40:50.138016 3373 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-etc-cni-netd\") pod \"abf9fc32-9588-4541-b62e-58efc1534cca\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") "
Mar 25 01:40:50.140254 kubelet[3373]: I0325 01:40:50.138042 3373 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-host-proc-sys-kernel\") pod \"abf9fc32-9588-4541-b62e-58efc1534cca\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") "
Mar 25 01:40:50.140254 kubelet[3373]: I0325 01:40:50.138067 3373 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-cilium-cgroup\") pod \"abf9fc32-9588-4541-b62e-58efc1534cca\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") "
Mar 25 01:40:50.140254 kubelet[3373]: I0325 01:40:50.138097 3373 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/abf9fc32-9588-4541-b62e-58efc1534cca-hubble-tls\") pod \"abf9fc32-9588-4541-b62e-58efc1534cca\" (UID: \"abf9fc32-9588-4541-b62e-58efc1534cca\") "
Mar 25 01:40:50.143361 kubelet[3373]: I0325 01:40:50.142349 3373 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-hostproc" (OuterVolumeSpecName: "hostproc") pod "abf9fc32-9588-4541-b62e-58efc1534cca" (UID: "abf9fc32-9588-4541-b62e-58efc1534cca"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:50.143361 kubelet[3373]: I0325 01:40:50.142431 3373 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-cni-path" (OuterVolumeSpecName: "cni-path") pod "abf9fc32-9588-4541-b62e-58efc1534cca" (UID: "abf9fc32-9588-4541-b62e-58efc1534cca"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:50.143361 kubelet[3373]: I0325 01:40:50.142461 3373 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "abf9fc32-9588-4541-b62e-58efc1534cca" (UID: "abf9fc32-9588-4541-b62e-58efc1534cca"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:50.143361 kubelet[3373]: I0325 01:40:50.142485 3373 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "abf9fc32-9588-4541-b62e-58efc1534cca" (UID: "abf9fc32-9588-4541-b62e-58efc1534cca"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:50.143361 kubelet[3373]: I0325 01:40:50.142510 3373 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "abf9fc32-9588-4541-b62e-58efc1534cca" (UID: "abf9fc32-9588-4541-b62e-58efc1534cca"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:50.145963 kubelet[3373]: I0325 01:40:50.145917 3373 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "abf9fc32-9588-4541-b62e-58efc1534cca" (UID: "abf9fc32-9588-4541-b62e-58efc1534cca"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:50.146067 kubelet[3373]: I0325 01:40:50.145975 3373 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "abf9fc32-9588-4541-b62e-58efc1534cca" (UID: "abf9fc32-9588-4541-b62e-58efc1534cca"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:50.146067 kubelet[3373]: I0325 01:40:50.146003 3373 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "abf9fc32-9588-4541-b62e-58efc1534cca" (UID: "abf9fc32-9588-4541-b62e-58efc1534cca"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:50.146067 kubelet[3373]: I0325 01:40:50.146027 3373 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "abf9fc32-9588-4541-b62e-58efc1534cca" (UID: "abf9fc32-9588-4541-b62e-58efc1534cca"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:50.146067 kubelet[3373]: I0325 01:40:50.146055 3373 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "abf9fc32-9588-4541-b62e-58efc1534cca" (UID: "abf9fc32-9588-4541-b62e-58efc1534cca"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:50.151136 kubelet[3373]: I0325 01:40:50.151093 3373 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/583567bf-5944-4dbc-9bec-d4c11784752d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "583567bf-5944-4dbc-9bec-d4c11784752d" (UID: "583567bf-5944-4dbc-9bec-d4c11784752d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 25 01:40:50.151243 kubelet[3373]: I0325 01:40:50.151219 3373 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abf9fc32-9588-4541-b62e-58efc1534cca-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "abf9fc32-9588-4541-b62e-58efc1534cca" (UID: "abf9fc32-9588-4541-b62e-58efc1534cca"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 25 01:40:50.151805 kubelet[3373]: I0325 01:40:50.151564 3373 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abf9fc32-9588-4541-b62e-58efc1534cca-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "abf9fc32-9588-4541-b62e-58efc1534cca" (UID: "abf9fc32-9588-4541-b62e-58efc1534cca"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 25 01:40:50.151805 kubelet[3373]: I0325 01:40:50.151670 3373 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abf9fc32-9588-4541-b62e-58efc1534cca-kube-api-access-8c7vh" (OuterVolumeSpecName: "kube-api-access-8c7vh") pod "abf9fc32-9588-4541-b62e-58efc1534cca" (UID: "abf9fc32-9588-4541-b62e-58efc1534cca"). InnerVolumeSpecName "kube-api-access-8c7vh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 25 01:40:50.152201 kubelet[3373]: I0325 01:40:50.152175 3373 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/583567bf-5944-4dbc-9bec-d4c11784752d-kube-api-access-wx9zq" (OuterVolumeSpecName: "kube-api-access-wx9zq") pod "583567bf-5944-4dbc-9bec-d4c11784752d" (UID: "583567bf-5944-4dbc-9bec-d4c11784752d"). InnerVolumeSpecName "kube-api-access-wx9zq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 25 01:40:50.152746 kubelet[3373]: I0325 01:40:50.152721 3373 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abf9fc32-9588-4541-b62e-58efc1534cca-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "abf9fc32-9588-4541-b62e-58efc1534cca" (UID: "abf9fc32-9588-4541-b62e-58efc1534cca"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 25 01:40:50.239140 kubelet[3373]: I0325 01:40:50.239077 3373 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abf9fc32-9588-4541-b62e-58efc1534cca-cilium-config-path\") on node \"ci-4284.0.0-a-b8cd1bf009\" DevicePath \"\""
Mar 25 01:40:50.239140 kubelet[3373]: I0325 01:40:50.239125 3373 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/abf9fc32-9588-4541-b62e-58efc1534cca-clustermesh-secrets\") on node \"ci-4284.0.0-a-b8cd1bf009\" DevicePath \"\""
Mar 25 01:40:50.239140 kubelet[3373]: I0325 01:40:50.239145 3373 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/583567bf-5944-4dbc-9bec-d4c11784752d-cilium-config-path\") on node \"ci-4284.0.0-a-b8cd1bf009\" DevicePath \"\""
Mar 25 01:40:50.239465 kubelet[3373]: I0325 01:40:50.239158 3373 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-bpf-maps\") on node \"ci-4284.0.0-a-b8cd1bf009\" DevicePath \"\""
Mar 25 01:40:50.239465 kubelet[3373]: I0325 01:40:50.239173 3373 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-xtables-lock\") on node \"ci-4284.0.0-a-b8cd1bf009\" DevicePath \"\""
Mar 25 01:40:50.239465 kubelet[3373]: I0325 01:40:50.239188 3373 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-etc-cni-netd\") on node \"ci-4284.0.0-a-b8cd1bf009\" DevicePath \"\""
Mar 25 01:40:50.239465 kubelet[3373]: I0325 01:40:50.239202 3373 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-host-proc-sys-kernel\") on node \"ci-4284.0.0-a-b8cd1bf009\" DevicePath \"\""
Mar 25 01:40:50.239465 kubelet[3373]: I0325 01:40:50.239214 3373 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-cilium-cgroup\") on node \"ci-4284.0.0-a-b8cd1bf009\" DevicePath \"\""
Mar 25 01:40:50.239465 kubelet[3373]: I0325 01:40:50.239229 3373 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/abf9fc32-9588-4541-b62e-58efc1534cca-hubble-tls\") on node \"ci-4284.0.0-a-b8cd1bf009\" DevicePath \"\""
Mar 25 01:40:50.239465 kubelet[3373]: I0325 01:40:50.239242 3373 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-hostproc\") on node \"ci-4284.0.0-a-b8cd1bf009\" DevicePath \"\""
Mar 25 01:40:50.239465 kubelet[3373]: I0325 01:40:50.239253 3373 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-cni-path\") on node \"ci-4284.0.0-a-b8cd1bf009\" DevicePath \"\""
Mar 25 01:40:50.239703 kubelet[3373]: I0325 01:40:50.239265 3373 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-lib-modules\") on node \"ci-4284.0.0-a-b8cd1bf009\" DevicePath \"\""
Mar 25 01:40:50.239703 kubelet[3373]: I0325 01:40:50.239308 3373 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-host-proc-sys-net\") on node \"ci-4284.0.0-a-b8cd1bf009\" DevicePath \"\""
Mar 25 01:40:50.239703 kubelet[3373]: I0325 01:40:50.239325 3373 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/abf9fc32-9588-4541-b62e-58efc1534cca-cilium-run\") on node \"ci-4284.0.0-a-b8cd1bf009\" DevicePath \"\""
Mar 25 01:40:50.239703 kubelet[3373]: I0325 01:40:50.239338 3373 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8c7vh\" (UniqueName: \"kubernetes.io/projected/abf9fc32-9588-4541-b62e-58efc1534cca-kube-api-access-8c7vh\") on node \"ci-4284.0.0-a-b8cd1bf009\" DevicePath \"\""
Mar 25 01:40:50.239703 kubelet[3373]: I0325 01:40:50.239353 3373 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wx9zq\" (UniqueName: \"kubernetes.io/projected/583567bf-5944-4dbc-9bec-d4c11784752d-kube-api-access-wx9zq\") on node \"ci-4284.0.0-a-b8cd1bf009\" DevicePath \"\""
Mar 25 01:40:50.343333 kubelet[3373]: I0325 01:40:50.341119 3373 scope.go:117] "RemoveContainer" containerID="85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd"
Mar 25 01:40:50.346221 containerd[1729]: time="2025-03-25T01:40:50.346179692Z" level=info msg="RemoveContainer for \"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd\""
Mar 25 01:40:50.349622 systemd[1]: Removed slice kubepods-besteffort-pod583567bf_5944_4dbc_9bec_d4c11784752d.slice - libcontainer container kubepods-besteffort-pod583567bf_5944_4dbc_9bec_d4c11784752d.slice.
Mar 25 01:40:50.356935 systemd[1]: Removed slice kubepods-burstable-podabf9fc32_9588_4541_b62e_58efc1534cca.slice - libcontainer container kubepods-burstable-podabf9fc32_9588_4541_b62e_58efc1534cca.slice.
Mar 25 01:40:50.357701 systemd[1]: kubepods-burstable-podabf9fc32_9588_4541_b62e_58efc1534cca.slice: Consumed 7.526s CPU time, 126.7M memory peak, 136K read from disk, 13.3M written to disk.
Mar 25 01:40:50.360896 containerd[1729]: time="2025-03-25T01:40:50.360535214Z" level=info msg="RemoveContainer for \"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd\" returns successfully"
Mar 25 01:40:50.360994 kubelet[3373]: I0325 01:40:50.360802 3373 scope.go:117] "RemoveContainer" containerID="85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd"
Mar 25 01:40:50.361078 containerd[1729]: time="2025-03-25T01:40:50.361038719Z" level=error msg="ContainerStatus for \"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd\": not found"
Mar 25 01:40:50.361352 kubelet[3373]: E0325 01:40:50.361327 3373 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd\": not found" containerID="85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd"
Mar 25 01:40:50.361770 kubelet[3373]: I0325 01:40:50.361484 3373 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd"} err="failed to get container status \"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd\": rpc error: code = NotFound desc = an error occurred when try to find container \"85b09d31a91c85103f3d89ee776101564992e81200e725fa225eceac3bf89fdd\": not found"
Mar 25 01:40:50.361770 kubelet[3373]: I0325 01:40:50.361566 3373 scope.go:117] "RemoveContainer" containerID="367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba"
Mar 25 01:40:50.363214 containerd[1729]: time="2025-03-25T01:40:50.363188737Z" level=info msg="RemoveContainer for \"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\""
Mar 25 01:40:50.370873 containerd[1729]: time="2025-03-25T01:40:50.370841202Z" level=info msg="RemoveContainer for \"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\" returns successfully"
Mar 25 01:40:50.371036 kubelet[3373]: I0325 01:40:50.371011 3373 scope.go:117] "RemoveContainer" containerID="c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c"
Mar 25 01:40:50.372450 containerd[1729]: time="2025-03-25T01:40:50.372424915Z" level=info msg="RemoveContainer for \"c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c\""
Mar 25 01:40:50.381794 containerd[1729]: time="2025-03-25T01:40:50.381757794Z" level=info msg="RemoveContainer for \"c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c\" returns successfully"
Mar 25 01:40:50.381957 kubelet[3373]: I0325 01:40:50.381944 3373 scope.go:117] "RemoveContainer" containerID="95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e"
Mar 25 01:40:50.384074 containerd[1729]: time="2025-03-25T01:40:50.384030114Z" level=info msg="RemoveContainer for \"95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e\""
Mar 25 01:40:50.393367 containerd[1729]: time="2025-03-25T01:40:50.393256492Z" level=info msg="RemoveContainer for \"95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e\" returns successfully"
Mar 25 01:40:50.393706 kubelet[3373]: I0325 01:40:50.393624 3373 scope.go:117] "RemoveContainer" containerID="b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d"
Mar 25 01:40:50.395553 containerd[1729]: time="2025-03-25T01:40:50.395455011Z" level=info msg="RemoveContainer for \"b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d\""
Mar 25 01:40:50.409998 containerd[1729]: time="2025-03-25T01:40:50.409962734Z" level=info msg="RemoveContainer for \"b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d\" returns successfully"
Mar 25 01:40:50.410384 kubelet[3373]: I0325 01:40:50.410141 3373 scope.go:117] "RemoveContainer" containerID="814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122"
Mar 25 01:40:50.412630 containerd[1729]: time="2025-03-25T01:40:50.412600156Z" level=info msg="RemoveContainer for \"814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122\""
Mar 25 01:40:50.430132 containerd[1729]: time="2025-03-25T01:40:50.430096405Z" level=info msg="RemoveContainer for \"814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122\" returns successfully"
Mar 25 01:40:50.430352 kubelet[3373]: I0325 01:40:50.430304 3373 scope.go:117] "RemoveContainer" containerID="367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba"
Mar 25 01:40:50.430649 containerd[1729]: time="2025-03-25T01:40:50.430603809Z" level=error msg="ContainerStatus for \"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\": not found"
Mar 25 01:40:50.430802 kubelet[3373]: E0325 01:40:50.430777 3373 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\": not found" containerID="367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba"
Mar 25 01:40:50.430882 kubelet[3373]: I0325 01:40:50.430807 3373 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba"} err="failed to get container status \"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"367ebd5562a0557ba3569043d2da66e0d20298c1fd3880d7ba4553adc885f9ba\": not found"
Mar 25 01:40:50.430882 kubelet[3373]: I0325 01:40:50.430833 3373 scope.go:117] "RemoveContainer" containerID="c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c"
Mar 25 01:40:50.431083 containerd[1729]: time="2025-03-25T01:40:50.431020512Z" level=error msg="ContainerStatus for \"c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c\": not found"
Mar 25 01:40:50.431171 kubelet[3373]: E0325 01:40:50.431152 3373 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c\": not found" containerID="c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c"
Mar 25 01:40:50.431265 kubelet[3373]: I0325 01:40:50.431177 3373 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c"} err="failed to get container status \"c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c\": rpc error: code = NotFound desc = an error occurred when try to find container \"c9e45bb49e862d35cebc6f8550b8bc43d20d39f8566bdd3287e4b305419db06c\": not found"
Mar 25 01:40:50.431265 kubelet[3373]: I0325 01:40:50.431196 3373 scope.go:117] "RemoveContainer" containerID="95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e"
Mar 25 01:40:50.431438 containerd[1729]: time="2025-03-25T01:40:50.431393516Z" level=error msg="ContainerStatus for \"95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e\": not found"
Mar 25 01:40:50.431576 kubelet[3373]: E0325 01:40:50.431544 3373 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e\": not found" containerID="95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e"
Mar 25 01:40:50.431686 kubelet[3373]: I0325 01:40:50.431569 3373 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e"} err="failed to get container status \"95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e\": rpc error: code = NotFound desc = an error occurred when try to find container \"95d8647ba76f220c04d590f1dcffeb6b20cebcc3f5c965a9f89a1856077bdb0e\": not found"
Mar 25 01:40:50.431686 kubelet[3373]: I0325 01:40:50.431589 3373 scope.go:117] "RemoveContainer" containerID="b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d"
Mar 25 01:40:50.431806 containerd[1729]: time="2025-03-25T01:40:50.431768419Z" level=error msg="ContainerStatus for \"b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d\": not found"
Mar 25 01:40:50.431953 kubelet[3373]: E0325 01:40:50.431923 3373 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d\": not found" containerID="b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d"
Mar 25 01:40:50.431953 kubelet[3373]: I0325 01:40:50.431945 3373 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d"} err="failed to get container status \"b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d\": rpc error:
code = NotFound desc = an error occurred when try to find container \"b7c03a2537b37525a6c26e5f5ace1174881dcdd6cfa652d32dc9fb68427f663d\": not found" Mar 25 01:40:50.432067 kubelet[3373]: I0325 01:40:50.431963 3373 scope.go:117] "RemoveContainer" containerID="814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122" Mar 25 01:40:50.432227 containerd[1729]: time="2025-03-25T01:40:50.432179322Z" level=error msg="ContainerStatus for \"814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122\": not found" Mar 25 01:40:50.432424 kubelet[3373]: E0325 01:40:50.432400 3373 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122\": not found" containerID="814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122" Mar 25 01:40:50.432489 kubelet[3373]: I0325 01:40:50.432434 3373 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122"} err="failed to get container status \"814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122\": rpc error: code = NotFound desc = an error occurred when try to find container \"814cdf1ec726e13071b10d78c210c7d2fc7d3b95498e2f39e2a22271b6bab122\": not found" Mar 25 01:40:50.889811 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55-shm.mount: Deactivated successfully. Mar 25 01:40:50.889970 systemd[1]: var-lib-kubelet-pods-583567bf\x2d5944\x2d4dbc\x2d9bec\x2dd4c11784752d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwx9zq.mount: Deactivated successfully. 
Mar 25 01:40:50.890061 systemd[1]: var-lib-kubelet-pods-abf9fc32\x2d9588\x2d4541\x2db62e\x2d58efc1534cca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8c7vh.mount: Deactivated successfully. Mar 25 01:40:50.890151 systemd[1]: var-lib-kubelet-pods-abf9fc32\x2d9588\x2d4541\x2db62e\x2d58efc1534cca-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 25 01:40:50.890232 systemd[1]: var-lib-kubelet-pods-abf9fc32\x2d9588\x2d4541\x2db62e\x2d58efc1534cca-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 25 01:40:51.885960 sshd[4909]: Connection closed by 10.200.16.10 port 57042 Mar 25 01:40:51.886805 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Mar 25 01:40:51.889968 systemd[1]: sshd@22-10.200.8.12:22-10.200.16.10:57042.service: Deactivated successfully. Mar 25 01:40:51.892155 systemd[1]: session-25.scope: Deactivated successfully. Mar 25 01:40:51.893746 systemd-logind[1704]: Session 25 logged out. Waiting for processes to exit. Mar 25 01:40:51.894924 systemd-logind[1704]: Removed session 25. Mar 25 01:40:51.954762 kubelet[3373]: I0325 01:40:51.954717 3373 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="583567bf-5944-4dbc-9bec-d4c11784752d" path="/var/lib/kubelet/pods/583567bf-5944-4dbc-9bec-d4c11784752d/volumes" Mar 25 01:40:51.955442 kubelet[3373]: I0325 01:40:51.955400 3373 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abf9fc32-9588-4541-b62e-58efc1534cca" path="/var/lib/kubelet/pods/abf9fc32-9588-4541-b62e-58efc1534cca/volumes" Mar 25 01:40:52.007414 systemd[1]: Started sshd@23-10.200.8.12:22-10.200.16.10:55036.service - OpenSSH per-connection server daemon (10.200.16.10:55036). 
Mar 25 01:40:52.643059 sshd[5061]: Accepted publickey for core from 10.200.16.10 port 55036 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:40:52.644738 sshd-session[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:40:52.649659 systemd-logind[1704]: New session 26 of user core. Mar 25 01:40:52.655439 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 25 01:40:53.040473 kubelet[3373]: E0325 01:40:53.040403 3373 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 25 01:40:53.391346 kubelet[3373]: I0325 01:40:53.391161 3373 topology_manager.go:215] "Topology Admit Handler" podUID="98b2798f-48cd-407d-b426-9dcf8b1b3cc7" podNamespace="kube-system" podName="cilium-c5sh2" Mar 25 01:40:53.391346 kubelet[3373]: E0325 01:40:53.391259 3373 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="abf9fc32-9588-4541-b62e-58efc1534cca" containerName="mount-cgroup" Mar 25 01:40:53.391346 kubelet[3373]: E0325 01:40:53.391274 3373 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="abf9fc32-9588-4541-b62e-58efc1534cca" containerName="apply-sysctl-overwrites" Mar 25 01:40:53.391346 kubelet[3373]: E0325 01:40:53.391303 3373 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="583567bf-5944-4dbc-9bec-d4c11784752d" containerName="cilium-operator" Mar 25 01:40:53.391346 kubelet[3373]: E0325 01:40:53.391311 3373 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="abf9fc32-9588-4541-b62e-58efc1534cca" containerName="mount-bpf-fs" Mar 25 01:40:53.391346 kubelet[3373]: E0325 01:40:53.391321 3373 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="abf9fc32-9588-4541-b62e-58efc1534cca" containerName="clean-cilium-state" Mar 25 01:40:53.391346 kubelet[3373]: E0325 01:40:53.391329 3373 cpu_manager.go:395] 
"RemoveStaleState: removing container" podUID="abf9fc32-9588-4541-b62e-58efc1534cca" containerName="cilium-agent" Mar 25 01:40:53.391711 kubelet[3373]: I0325 01:40:53.391377 3373 memory_manager.go:354] "RemoveStaleState removing state" podUID="583567bf-5944-4dbc-9bec-d4c11784752d" containerName="cilium-operator" Mar 25 01:40:53.391711 kubelet[3373]: I0325 01:40:53.391386 3373 memory_manager.go:354] "RemoveStaleState removing state" podUID="abf9fc32-9588-4541-b62e-58efc1534cca" containerName="cilium-agent" Mar 25 01:40:53.403315 systemd[1]: Created slice kubepods-burstable-pod98b2798f_48cd_407d_b426_9dcf8b1b3cc7.slice - libcontainer container kubepods-burstable-pod98b2798f_48cd_407d_b426_9dcf8b1b3cc7.slice. Mar 25 01:40:53.496790 sshd[5063]: Connection closed by 10.200.16.10 port 55036 Mar 25 01:40:53.497747 sshd-session[5061]: pam_unix(sshd:session): session closed for user core Mar 25 01:40:53.502477 systemd[1]: sshd@23-10.200.8.12:22-10.200.16.10:55036.service: Deactivated successfully. Mar 25 01:40:53.505147 systemd[1]: session-26.scope: Deactivated successfully. Mar 25 01:40:53.506043 systemd-logind[1704]: Session 26 logged out. Waiting for processes to exit. Mar 25 01:40:53.507135 systemd-logind[1704]: Removed session 26. 
Mar 25 01:40:53.558759 kubelet[3373]: I0325 01:40:53.558662 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/98b2798f-48cd-407d-b426-9dcf8b1b3cc7-hostproc\") pod \"cilium-c5sh2\" (UID: \"98b2798f-48cd-407d-b426-9dcf8b1b3cc7\") " pod="kube-system/cilium-c5sh2" Mar 25 01:40:53.558759 kubelet[3373]: I0325 01:40:53.558733 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/98b2798f-48cd-407d-b426-9dcf8b1b3cc7-host-proc-sys-net\") pod \"cilium-c5sh2\" (UID: \"98b2798f-48cd-407d-b426-9dcf8b1b3cc7\") " pod="kube-system/cilium-c5sh2" Mar 25 01:40:53.558759 kubelet[3373]: I0325 01:40:53.558767 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/98b2798f-48cd-407d-b426-9dcf8b1b3cc7-cilium-run\") pod \"cilium-c5sh2\" (UID: \"98b2798f-48cd-407d-b426-9dcf8b1b3cc7\") " pod="kube-system/cilium-c5sh2" Mar 25 01:40:53.559214 kubelet[3373]: I0325 01:40:53.558797 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbsn8\" (UniqueName: \"kubernetes.io/projected/98b2798f-48cd-407d-b426-9dcf8b1b3cc7-kube-api-access-xbsn8\") pod \"cilium-c5sh2\" (UID: \"98b2798f-48cd-407d-b426-9dcf8b1b3cc7\") " pod="kube-system/cilium-c5sh2" Mar 25 01:40:53.559214 kubelet[3373]: I0325 01:40:53.558823 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/98b2798f-48cd-407d-b426-9dcf8b1b3cc7-bpf-maps\") pod \"cilium-c5sh2\" (UID: \"98b2798f-48cd-407d-b426-9dcf8b1b3cc7\") " pod="kube-system/cilium-c5sh2" Mar 25 01:40:53.559214 kubelet[3373]: I0325 01:40:53.558847 3373 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/98b2798f-48cd-407d-b426-9dcf8b1b3cc7-cni-path\") pod \"cilium-c5sh2\" (UID: \"98b2798f-48cd-407d-b426-9dcf8b1b3cc7\") " pod="kube-system/cilium-c5sh2" Mar 25 01:40:53.559214 kubelet[3373]: I0325 01:40:53.558870 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98b2798f-48cd-407d-b426-9dcf8b1b3cc7-etc-cni-netd\") pod \"cilium-c5sh2\" (UID: \"98b2798f-48cd-407d-b426-9dcf8b1b3cc7\") " pod="kube-system/cilium-c5sh2" Mar 25 01:40:53.559214 kubelet[3373]: I0325 01:40:53.558899 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98b2798f-48cd-407d-b426-9dcf8b1b3cc7-xtables-lock\") pod \"cilium-c5sh2\" (UID: \"98b2798f-48cd-407d-b426-9dcf8b1b3cc7\") " pod="kube-system/cilium-c5sh2" Mar 25 01:40:53.559214 kubelet[3373]: I0325 01:40:53.558926 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98b2798f-48cd-407d-b426-9dcf8b1b3cc7-cilium-config-path\") pod \"cilium-c5sh2\" (UID: \"98b2798f-48cd-407d-b426-9dcf8b1b3cc7\") " pod="kube-system/cilium-c5sh2" Mar 25 01:40:53.559451 kubelet[3373]: I0325 01:40:53.558970 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/98b2798f-48cd-407d-b426-9dcf8b1b3cc7-host-proc-sys-kernel\") pod \"cilium-c5sh2\" (UID: \"98b2798f-48cd-407d-b426-9dcf8b1b3cc7\") " pod="kube-system/cilium-c5sh2" Mar 25 01:40:53.559451 kubelet[3373]: I0325 01:40:53.558997 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/98b2798f-48cd-407d-b426-9dcf8b1b3cc7-hubble-tls\") pod \"cilium-c5sh2\" (UID: \"98b2798f-48cd-407d-b426-9dcf8b1b3cc7\") " pod="kube-system/cilium-c5sh2" Mar 25 01:40:53.559451 kubelet[3373]: I0325 01:40:53.559047 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/98b2798f-48cd-407d-b426-9dcf8b1b3cc7-cilium-ipsec-secrets\") pod \"cilium-c5sh2\" (UID: \"98b2798f-48cd-407d-b426-9dcf8b1b3cc7\") " pod="kube-system/cilium-c5sh2" Mar 25 01:40:53.559451 kubelet[3373]: I0325 01:40:53.559071 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/98b2798f-48cd-407d-b426-9dcf8b1b3cc7-cilium-cgroup\") pod \"cilium-c5sh2\" (UID: \"98b2798f-48cd-407d-b426-9dcf8b1b3cc7\") " pod="kube-system/cilium-c5sh2" Mar 25 01:40:53.559451 kubelet[3373]: I0325 01:40:53.559101 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98b2798f-48cd-407d-b426-9dcf8b1b3cc7-lib-modules\") pod \"cilium-c5sh2\" (UID: \"98b2798f-48cd-407d-b426-9dcf8b1b3cc7\") " pod="kube-system/cilium-c5sh2" Mar 25 01:40:53.559451 kubelet[3373]: I0325 01:40:53.559127 3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/98b2798f-48cd-407d-b426-9dcf8b1b3cc7-clustermesh-secrets\") pod \"cilium-c5sh2\" (UID: \"98b2798f-48cd-407d-b426-9dcf8b1b3cc7\") " pod="kube-system/cilium-c5sh2" Mar 25 01:40:53.610006 systemd[1]: Started sshd@24-10.200.8.12:22-10.200.16.10:55038.service - OpenSSH per-connection server daemon (10.200.16.10:55038). 
Mar 25 01:40:53.708188 containerd[1729]: time="2025-03-25T01:40:53.708140294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c5sh2,Uid:98b2798f-48cd-407d-b426-9dcf8b1b3cc7,Namespace:kube-system,Attempt:0,}" Mar 25 01:40:53.756034 containerd[1729]: time="2025-03-25T01:40:53.755943946Z" level=info msg="connecting to shim c50fa84d331c486c5c03caf58e21e92fdfa9147cb97471cd0a9f25fdcb2ed348" address="unix:///run/containerd/s/bc945a2beeffa32de45cc62d6f176e229b9ba08161ce84e162a3778500ff2dc7" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:40:53.781427 systemd[1]: Started cri-containerd-c50fa84d331c486c5c03caf58e21e92fdfa9147cb97471cd0a9f25fdcb2ed348.scope - libcontainer container c50fa84d331c486c5c03caf58e21e92fdfa9147cb97471cd0a9f25fdcb2ed348. Mar 25 01:40:53.813821 containerd[1729]: time="2025-03-25T01:40:53.813764730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c5sh2,Uid:98b2798f-48cd-407d-b426-9dcf8b1b3cc7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c50fa84d331c486c5c03caf58e21e92fdfa9147cb97471cd0a9f25fdcb2ed348\"" Mar 25 01:40:53.816420 containerd[1729]: time="2025-03-25T01:40:53.816378538Z" level=info msg="CreateContainer within sandbox \"c50fa84d331c486c5c03caf58e21e92fdfa9147cb97471cd0a9f25fdcb2ed348\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 25 01:40:53.834361 containerd[1729]: time="2025-03-25T01:40:53.834175995Z" level=info msg="Container 17e73f1b20a095a7e048365a9bca4b4a1b5efe480c80226a75e1ecc45da79436: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:40:53.846780 containerd[1729]: time="2025-03-25T01:40:53.846742434Z" level=info msg="CreateContainer within sandbox \"c50fa84d331c486c5c03caf58e21e92fdfa9147cb97471cd0a9f25fdcb2ed348\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"17e73f1b20a095a7e048365a9bca4b4a1b5efe480c80226a75e1ecc45da79436\"" Mar 25 01:40:53.848491 containerd[1729]: time="2025-03-25T01:40:53.847213436Z" level=info 
msg="StartContainer for \"17e73f1b20a095a7e048365a9bca4b4a1b5efe480c80226a75e1ecc45da79436\"" Mar 25 01:40:53.848491 containerd[1729]: time="2025-03-25T01:40:53.848075039Z" level=info msg="connecting to shim 17e73f1b20a095a7e048365a9bca4b4a1b5efe480c80226a75e1ecc45da79436" address="unix:///run/containerd/s/bc945a2beeffa32de45cc62d6f176e229b9ba08161ce84e162a3778500ff2dc7" protocol=ttrpc version=3 Mar 25 01:40:53.868446 systemd[1]: Started cri-containerd-17e73f1b20a095a7e048365a9bca4b4a1b5efe480c80226a75e1ecc45da79436.scope - libcontainer container 17e73f1b20a095a7e048365a9bca4b4a1b5efe480c80226a75e1ecc45da79436. Mar 25 01:40:53.900523 containerd[1729]: time="2025-03-25T01:40:53.900397505Z" level=info msg="StartContainer for \"17e73f1b20a095a7e048365a9bca4b4a1b5efe480c80226a75e1ecc45da79436\" returns successfully" Mar 25 01:40:53.906596 systemd[1]: cri-containerd-17e73f1b20a095a7e048365a9bca4b4a1b5efe480c80226a75e1ecc45da79436.scope: Deactivated successfully. Mar 25 01:40:53.909955 containerd[1729]: time="2025-03-25T01:40:53.909835535Z" level=info msg="received exit event container_id:\"17e73f1b20a095a7e048365a9bca4b4a1b5efe480c80226a75e1ecc45da79436\" id:\"17e73f1b20a095a7e048365a9bca4b4a1b5efe480c80226a75e1ecc45da79436\" pid:5137 exited_at:{seconds:1742866853 nanos:909613234}" Mar 25 01:40:53.910181 containerd[1729]: time="2025-03-25T01:40:53.910029336Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17e73f1b20a095a7e048365a9bca4b4a1b5efe480c80226a75e1ecc45da79436\" id:\"17e73f1b20a095a7e048365a9bca4b4a1b5efe480c80226a75e1ecc45da79436\" pid:5137 exited_at:{seconds:1742866853 nanos:909613234}" Mar 25 01:40:54.245921 sshd[5073]: Accepted publickey for core from 10.200.16.10 port 55038 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:40:54.247893 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:40:54.255754 systemd-logind[1704]: New session 27 of user core. 
Mar 25 01:40:54.260457 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 25 01:40:54.364396 containerd[1729]: time="2025-03-25T01:40:54.364224280Z" level=info msg="CreateContainer within sandbox \"c50fa84d331c486c5c03caf58e21e92fdfa9147cb97471cd0a9f25fdcb2ed348\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 25 01:40:54.383467 containerd[1729]: time="2025-03-25T01:40:54.383424642Z" level=info msg="Container 606401ed54de43165c46cab462aadd029ab9622d8e20db557f83fc0e48840313: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:40:54.397252 containerd[1729]: time="2025-03-25T01:40:54.397212985Z" level=info msg="CreateContainer within sandbox \"c50fa84d331c486c5c03caf58e21e92fdfa9147cb97471cd0a9f25fdcb2ed348\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"606401ed54de43165c46cab462aadd029ab9622d8e20db557f83fc0e48840313\"" Mar 25 01:40:54.397788 containerd[1729]: time="2025-03-25T01:40:54.397751987Z" level=info msg="StartContainer for \"606401ed54de43165c46cab462aadd029ab9622d8e20db557f83fc0e48840313\"" Mar 25 01:40:54.399049 containerd[1729]: time="2025-03-25T01:40:54.399021191Z" level=info msg="connecting to shim 606401ed54de43165c46cab462aadd029ab9622d8e20db557f83fc0e48840313" address="unix:///run/containerd/s/bc945a2beeffa32de45cc62d6f176e229b9ba08161ce84e162a3778500ff2dc7" protocol=ttrpc version=3 Mar 25 01:40:54.421469 systemd[1]: Started cri-containerd-606401ed54de43165c46cab462aadd029ab9622d8e20db557f83fc0e48840313.scope - libcontainer container 606401ed54de43165c46cab462aadd029ab9622d8e20db557f83fc0e48840313. Mar 25 01:40:54.452099 containerd[1729]: time="2025-03-25T01:40:54.452047760Z" level=info msg="StartContainer for \"606401ed54de43165c46cab462aadd029ab9622d8e20db557f83fc0e48840313\" returns successfully" Mar 25 01:40:54.456480 systemd[1]: cri-containerd-606401ed54de43165c46cab462aadd029ab9622d8e20db557f83fc0e48840313.scope: Deactivated successfully. 
Mar 25 01:40:54.456879 containerd[1729]: time="2025-03-25T01:40:54.456642874Z" level=info msg="received exit event container_id:\"606401ed54de43165c46cab462aadd029ab9622d8e20db557f83fc0e48840313\" id:\"606401ed54de43165c46cab462aadd029ab9622d8e20db557f83fc0e48840313\" pid:5185 exited_at:{seconds:1742866854 nanos:456428774}" Mar 25 01:40:54.456944 containerd[1729]: time="2025-03-25T01:40:54.456921275Z" level=info msg="TaskExit event in podsandbox handler container_id:\"606401ed54de43165c46cab462aadd029ab9622d8e20db557f83fc0e48840313\" id:\"606401ed54de43165c46cab462aadd029ab9622d8e20db557f83fc0e48840313\" pid:5185 exited_at:{seconds:1742866854 nanos:456428774}" Mar 25 01:40:54.686340 sshd[5171]: Connection closed by 10.200.16.10 port 55038 Mar 25 01:40:54.687070 sshd-session[5073]: pam_unix(sshd:session): session closed for user core Mar 25 01:40:54.691091 systemd[1]: sshd@24-10.200.8.12:22-10.200.16.10:55038.service: Deactivated successfully. Mar 25 01:40:54.693189 systemd[1]: session-27.scope: Deactivated successfully. Mar 25 01:40:54.694045 systemd-logind[1704]: Session 27 logged out. Waiting for processes to exit. Mar 25 01:40:54.695126 systemd-logind[1704]: Removed session 27. Mar 25 01:40:54.797593 systemd[1]: Started sshd@25-10.200.8.12:22-10.200.16.10:55048.service - OpenSSH per-connection server daemon (10.200.16.10:55048). 
Mar 25 01:40:55.368210 containerd[1729]: time="2025-03-25T01:40:55.368137374Z" level=info msg="CreateContainer within sandbox \"c50fa84d331c486c5c03caf58e21e92fdfa9147cb97471cd0a9f25fdcb2ed348\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 25 01:40:55.397675 containerd[1729]: time="2025-03-25T01:40:55.395415060Z" level=info msg="Container 2d7f0e3492e89b6e881346c6841c05b88f3106c4cfbd9ebdbff044b74f58d3dc: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:40:55.411872 containerd[1729]: time="2025-03-25T01:40:55.411827513Z" level=info msg="CreateContainer within sandbox \"c50fa84d331c486c5c03caf58e21e92fdfa9147cb97471cd0a9f25fdcb2ed348\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2d7f0e3492e89b6e881346c6841c05b88f3106c4cfbd9ebdbff044b74f58d3dc\"" Mar 25 01:40:55.413131 containerd[1729]: time="2025-03-25T01:40:55.412469015Z" level=info msg="StartContainer for \"2d7f0e3492e89b6e881346c6841c05b88f3106c4cfbd9ebdbff044b74f58d3dc\"" Mar 25 01:40:55.414224 containerd[1729]: time="2025-03-25T01:40:55.414190420Z" level=info msg="connecting to shim 2d7f0e3492e89b6e881346c6841c05b88f3106c4cfbd9ebdbff044b74f58d3dc" address="unix:///run/containerd/s/bc945a2beeffa32de45cc62d6f176e229b9ba08161ce84e162a3778500ff2dc7" protocol=ttrpc version=3 Mar 25 01:40:55.427380 sshd[5221]: Accepted publickey for core from 10.200.16.10 port 55048 ssh2: RSA SHA256:yvM9aJCEcWMwwpyRstQ24Z65MqryworXgmyV3HoKOoA Mar 25 01:40:55.429545 sshd-session[5221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:40:55.437058 systemd-logind[1704]: New session 28 of user core. Mar 25 01:40:55.443443 systemd[1]: Started cri-containerd-2d7f0e3492e89b6e881346c6841c05b88f3106c4cfbd9ebdbff044b74f58d3dc.scope - libcontainer container 2d7f0e3492e89b6e881346c6841c05b88f3106c4cfbd9ebdbff044b74f58d3dc. Mar 25 01:40:55.444870 systemd[1]: Started session-28.scope - Session 28 of User core. 
Mar 25 01:40:55.489944 systemd[1]: cri-containerd-2d7f0e3492e89b6e881346c6841c05b88f3106c4cfbd9ebdbff044b74f58d3dc.scope: Deactivated successfully. Mar 25 01:40:55.492091 containerd[1729]: time="2025-03-25T01:40:55.492026868Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d7f0e3492e89b6e881346c6841c05b88f3106c4cfbd9ebdbff044b74f58d3dc\" id:\"2d7f0e3492e89b6e881346c6841c05b88f3106c4cfbd9ebdbff044b74f58d3dc\" pid:5238 exited_at:{seconds:1742866855 nanos:491680067}" Mar 25 01:40:55.492918 containerd[1729]: time="2025-03-25T01:40:55.492885270Z" level=info msg="received exit event container_id:\"2d7f0e3492e89b6e881346c6841c05b88f3106c4cfbd9ebdbff044b74f58d3dc\" id:\"2d7f0e3492e89b6e881346c6841c05b88f3106c4cfbd9ebdbff044b74f58d3dc\" pid:5238 exited_at:{seconds:1742866855 nanos:491680067}" Mar 25 01:40:55.495467 containerd[1729]: time="2025-03-25T01:40:55.495389678Z" level=info msg="StartContainer for \"2d7f0e3492e89b6e881346c6841c05b88f3106c4cfbd9ebdbff044b74f58d3dc\" returns successfully" Mar 25 01:40:55.515045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d7f0e3492e89b6e881346c6841c05b88f3106c4cfbd9ebdbff044b74f58d3dc-rootfs.mount: Deactivated successfully. 
Mar 25 01:40:56.373656 containerd[1729]: time="2025-03-25T01:40:56.373542572Z" level=info msg="CreateContainer within sandbox \"c50fa84d331c486c5c03caf58e21e92fdfa9147cb97471cd0a9f25fdcb2ed348\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 25 01:40:56.396193 containerd[1729]: time="2025-03-25T01:40:56.395396641Z" level=info msg="Container 9e7423437f63c31669361520b7a458425364e2a4c9473828c5fac66c34c49dc0: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:40:56.410878 containerd[1729]: time="2025-03-25T01:40:56.410841890Z" level=info msg="CreateContainer within sandbox \"c50fa84d331c486c5c03caf58e21e92fdfa9147cb97471cd0a9f25fdcb2ed348\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9e7423437f63c31669361520b7a458425364e2a4c9473828c5fac66c34c49dc0\"" Mar 25 01:40:56.411625 containerd[1729]: time="2025-03-25T01:40:56.411336092Z" level=info msg="StartContainer for \"9e7423437f63c31669361520b7a458425364e2a4c9473828c5fac66c34c49dc0\"" Mar 25 01:40:56.412316 containerd[1729]: time="2025-03-25T01:40:56.412270995Z" level=info msg="connecting to shim 9e7423437f63c31669361520b7a458425364e2a4c9473828c5fac66c34c49dc0" address="unix:///run/containerd/s/bc945a2beeffa32de45cc62d6f176e229b9ba08161ce84e162a3778500ff2dc7" protocol=ttrpc version=3 Mar 25 01:40:56.435695 systemd[1]: Started cri-containerd-9e7423437f63c31669361520b7a458425364e2a4c9473828c5fac66c34c49dc0.scope - libcontainer container 9e7423437f63c31669361520b7a458425364e2a4c9473828c5fac66c34c49dc0. Mar 25 01:40:56.463144 systemd[1]: cri-containerd-9e7423437f63c31669361520b7a458425364e2a4c9473828c5fac66c34c49dc0.scope: Deactivated successfully. 
Mar 25 01:40:56.464102 containerd[1729]: time="2025-03-25T01:40:56.463511258Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e7423437f63c31669361520b7a458425364e2a4c9473828c5fac66c34c49dc0\" id:\"9e7423437f63c31669361520b7a458425364e2a4c9473828c5fac66c34c49dc0\" pid:5281 exited_at:{seconds:1742866856 nanos:463037656}" Mar 25 01:40:56.467612 containerd[1729]: time="2025-03-25T01:40:56.467042769Z" level=info msg="received exit event container_id:\"9e7423437f63c31669361520b7a458425364e2a4c9473828c5fac66c34c49dc0\" id:\"9e7423437f63c31669361520b7a458425364e2a4c9473828c5fac66c34c49dc0\" pid:5281 exited_at:{seconds:1742866856 nanos:463037656}" Mar 25 01:40:56.474267 containerd[1729]: time="2025-03-25T01:40:56.474237492Z" level=info msg="StartContainer for \"9e7423437f63c31669361520b7a458425364e2a4c9473828c5fac66c34c49dc0\" returns successfully" Mar 25 01:40:56.490165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e7423437f63c31669361520b7a458425364e2a4c9473828c5fac66c34c49dc0-rootfs.mount: Deactivated successfully. 
Mar 25 01:40:57.381027 containerd[1729]: time="2025-03-25T01:40:57.380195611Z" level=info msg="CreateContainer within sandbox \"c50fa84d331c486c5c03caf58e21e92fdfa9147cb97471cd0a9f25fdcb2ed348\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 25 01:40:57.436973 containerd[1729]: time="2025-03-25T01:40:57.435918470Z" level=info msg="Container d1d8096443dad8bc64aa3a05742bca54d273db1262fed47b42dd5e9427b2256d: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:40:57.452650 containerd[1729]: time="2025-03-25T01:40:57.452613807Z" level=info msg="CreateContainer within sandbox \"c50fa84d331c486c5c03caf58e21e92fdfa9147cb97471cd0a9f25fdcb2ed348\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d1d8096443dad8bc64aa3a05742bca54d273db1262fed47b42dd5e9427b2256d\""
Mar 25 01:40:57.454466 containerd[1729]: time="2025-03-25T01:40:57.453196112Z" level=info msg="StartContainer for \"d1d8096443dad8bc64aa3a05742bca54d273db1262fed47b42dd5e9427b2256d\""
Mar 25 01:40:57.454466 containerd[1729]: time="2025-03-25T01:40:57.454185820Z" level=info msg="connecting to shim d1d8096443dad8bc64aa3a05742bca54d273db1262fed47b42dd5e9427b2256d" address="unix:///run/containerd/s/bc945a2beeffa32de45cc62d6f176e229b9ba08161ce84e162a3778500ff2dc7" protocol=ttrpc version=3
Mar 25 01:40:57.476430 systemd[1]: Started cri-containerd-d1d8096443dad8bc64aa3a05742bca54d273db1262fed47b42dd5e9427b2256d.scope - libcontainer container d1d8096443dad8bc64aa3a05742bca54d273db1262fed47b42dd5e9427b2256d.
Mar 25 01:40:57.513574 containerd[1729]: time="2025-03-25T01:40:57.513519908Z" level=info msg="StartContainer for \"d1d8096443dad8bc64aa3a05742bca54d273db1262fed47b42dd5e9427b2256d\" returns successfully"
Mar 25 01:40:57.598112 containerd[1729]: time="2025-03-25T01:40:57.598065304Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d8096443dad8bc64aa3a05742bca54d273db1262fed47b42dd5e9427b2256d\" id:\"1fce8c055be3e6d98bd2da39f0a65ed0c0c74851b9b27ecb666cd427756023d1\" pid:5348 exited_at:{seconds:1742866857 nanos:597523799}"
Mar 25 01:40:58.013382 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 25 01:40:58.398248 kubelet[3373]: I0325 01:40:58.398091 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c5sh2" podStartSLOduration=5.398058885 podStartE2EDuration="5.398058885s" podCreationTimestamp="2025-03-25 01:40:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:40:58.397932884 +0000 UTC m=+170.536680508" watchObservedRunningTime="2025-03-25 01:40:58.398058885 +0000 UTC m=+170.536806509"
Mar 25 01:41:00.032885 containerd[1729]: time="2025-03-25T01:41:00.032817909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d8096443dad8bc64aa3a05742bca54d273db1262fed47b42dd5e9427b2256d\" id:\"18bcb99e0c1a18745ae8effaaa01de05d9d9fa44776abb2416ac1d4cb6932120\" pid:5562 exit_status:1 exited_at:{seconds:1742866860 nanos:32349305}"
Mar 25 01:41:00.827528 systemd-networkd[1560]: lxc_health: Link UP
Mar 25 01:41:00.836209 systemd-networkd[1560]: lxc_health: Gained carrier
Mar 25 01:41:02.095499 systemd-networkd[1560]: lxc_health: Gained IPv6LL
Mar 25 01:41:02.223355 containerd[1729]: time="2025-03-25T01:41:02.223293257Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d8096443dad8bc64aa3a05742bca54d273db1262fed47b42dd5e9427b2256d\" id:\"0fe9370118b2e8698e8487c0861bd10f42c748891769a79dbc0286d0bf6827c1\" pid:5915 exited_at:{seconds:1742866862 nanos:222836053}"
Mar 25 01:41:04.422310 containerd[1729]: time="2025-03-25T01:41:04.420778073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d8096443dad8bc64aa3a05742bca54d273db1262fed47b42dd5e9427b2256d\" id:\"b0d5a86dd094ec9c1302102bbb2651217b5b43a52a021c21709472510aa41df4\" pid:5945 exited_at:{seconds:1742866864 nanos:420350469}"
Mar 25 01:41:06.532435 containerd[1729]: time="2025-03-25T01:41:06.532196655Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d8096443dad8bc64aa3a05742bca54d273db1262fed47b42dd5e9427b2256d\" id:\"390fc161844327fe85d257c10e322bbdfaae8143ca25e1bfeeed6e0ba3138e57\" pid:5975 exited_at:{seconds:1742866866 nanos:531654749}"
Mar 25 01:41:06.640771 sshd[5244]: Connection closed by 10.200.16.10 port 55048
Mar 25 01:41:06.641845 sshd-session[5221]: pam_unix(sshd:session): session closed for user core
Mar 25 01:41:06.645649 systemd[1]: sshd@25-10.200.8.12:22-10.200.16.10:55048.service: Deactivated successfully.
Mar 25 01:41:06.648205 systemd[1]: session-28.scope: Deactivated successfully.
Mar 25 01:41:06.649876 systemd-logind[1704]: Session 28 logged out. Waiting for processes to exit.
Mar 25 01:41:06.650921 systemd-logind[1704]: Removed session 28.
Mar 25 01:41:07.948704 containerd[1729]: time="2025-03-25T01:41:07.948659619Z" level=info msg="StopPodSandbox for \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\""
Mar 25 01:41:07.949181 containerd[1729]: time="2025-03-25T01:41:07.948835521Z" level=info msg="TearDown network for sandbox \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" successfully"
Mar 25 01:41:07.949181 containerd[1729]: time="2025-03-25T01:41:07.948859021Z" level=info msg="StopPodSandbox for \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" returns successfully"
Mar 25 01:41:07.949451 containerd[1729]: time="2025-03-25T01:41:07.949340125Z" level=info msg="RemovePodSandbox for \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\""
Mar 25 01:41:07.949451 containerd[1729]: time="2025-03-25T01:41:07.949376125Z" level=info msg="Forcibly stopping sandbox \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\""
Mar 25 01:41:07.949604 containerd[1729]: time="2025-03-25T01:41:07.949489126Z" level=info msg="TearDown network for sandbox \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" successfully"
Mar 25 01:41:07.950715 containerd[1729]: time="2025-03-25T01:41:07.950686336Z" level=info msg="Ensure that sandbox 4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1 in task-service has been cleanup successfully"
Mar 25 01:41:07.958728 containerd[1729]: time="2025-03-25T01:41:07.958700202Z" level=info msg="RemovePodSandbox \"4f28ce44484a9318aeaa3c2ad45aab9b5bd61278dca030fc35790349af8284b1\" returns successfully"
Mar 25 01:41:07.959067 containerd[1729]: time="2025-03-25T01:41:07.959037005Z" level=info msg="StopPodSandbox for \"d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55\""
Mar 25 01:41:07.959231 containerd[1729]: time="2025-03-25T01:41:07.959165006Z" level=info msg="TearDown network for sandbox \"d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55\" successfully"
Mar 25 01:41:07.959231 containerd[1729]: time="2025-03-25T01:41:07.959188906Z" level=info msg="StopPodSandbox for \"d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55\" returns successfully"
Mar 25 01:41:07.959503 containerd[1729]: time="2025-03-25T01:41:07.959470308Z" level=info msg="RemovePodSandbox for \"d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55\""
Mar 25 01:41:07.959557 containerd[1729]: time="2025-03-25T01:41:07.959500908Z" level=info msg="Forcibly stopping sandbox \"d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55\""
Mar 25 01:41:07.959611 containerd[1729]: time="2025-03-25T01:41:07.959591309Z" level=info msg="TearDown network for sandbox \"d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55\" successfully"
Mar 25 01:41:07.960625 containerd[1729]: time="2025-03-25T01:41:07.960599617Z" level=info msg="Ensure that sandbox d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55 in task-service has been cleanup successfully"
Mar 25 01:41:07.970580 containerd[1729]: time="2025-03-25T01:41:07.969876793Z" level=info msg="RemovePodSandbox \"d6ddbb720ba58f65358549b26e50f44fb0f9902184ecc305b2c7ce68e10bdc55\" returns successfully"