Feb 13 15:37:53.053508 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025
Feb 13 15:37:53.053545 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:37:53.053561 kernel: BIOS-provided physical RAM map:
Feb 13 15:37:53.053572 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 15:37:53.053582 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 13 15:37:53.053593 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 13 15:37:53.053607 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 13 15:37:53.053622 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 13 15:37:53.053656 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 13 15:37:53.053668 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 13 15:37:53.053680 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 13 15:37:53.053691 kernel: printk: bootconsole [earlyser0] enabled
Feb 13 15:37:53.053702 kernel: NX (Execute Disable) protection: active
Feb 13 15:37:53.053714 kernel: APIC: Static calls initialized
Feb 13 15:37:53.053731 kernel: efi: EFI v2.7 by Microsoft
Feb 13 15:37:53.053744 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018
Feb 13 15:37:53.053757 kernel: random: crng init done
Feb 13 15:37:53.053769 kernel: secureboot: Secure boot disabled
Feb 13 15:37:53.053781 kernel: SMBIOS 3.1.0 present.
Feb 13 15:37:53.053794 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Feb 13 15:37:53.053806 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 13 15:37:53.053819 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 13 15:37:53.053832 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0
Feb 13 15:37:53.053845 kernel: Hyper-V: Nested features: 0x1e0101
Feb 13 15:37:53.053860 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 13 15:37:53.053873 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 13 15:37:53.053886 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 13 15:37:53.053899 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 13 15:37:53.053913 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 13 15:37:53.053926 kernel: tsc: Detected 2593.908 MHz processor
Feb 13 15:37:53.053939 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:37:53.053953 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:37:53.053966 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 13 15:37:53.053982 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 15:37:53.053995 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:37:53.054008 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 13 15:37:53.054020 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 13 15:37:53.054033 kernel: Using GB pages for direct mapping
Feb 13 15:37:53.054046 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:37:53.054059 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 13 15:37:53.054078 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:37:53.054095 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:37:53.054109 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 13 15:37:53.054122 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 13 15:37:53.054136 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:37:53.054151 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:37:53.054165 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:37:53.054181 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:37:53.054196 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:37:53.054210 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:37:53.054224 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:37:53.054237 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 13 15:37:53.054251 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 13 15:37:53.054265 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 13 15:37:53.054279 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 13 15:37:53.054293 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 13 15:37:53.054309 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 13 15:37:53.054323 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 13 15:37:53.054336 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 13 15:37:53.054350 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 13 15:37:53.054364 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 13 15:37:53.054378 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 15:37:53.054392 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 15:37:53.054406 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 13 15:37:53.054420 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 13 15:37:53.054436 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 13 15:37:53.054450 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 13 15:37:53.054464 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 13 15:37:53.054478 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 13 15:37:53.054492 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 13 15:37:53.054506 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 13 15:37:53.054519 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 13 15:37:53.054534 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 13 15:37:53.054551 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 13 15:37:53.054565 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 13 15:37:53.054579 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 13 15:37:53.054592 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 13 15:37:53.054606 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 13 15:37:53.054620 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 13 15:37:53.054656 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 13 15:37:53.054668 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 13 15:37:53.054681 kernel: Zone ranges:
Feb 13 15:37:53.054698 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:37:53.054712 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 15:37:53.054726 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 13 15:37:53.054740 kernel: Movable zone start for each node
Feb 13 15:37:53.054754 kernel: Early memory node ranges
Feb 13 15:37:53.054766 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 15:37:53.054778 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 13 15:37:53.054790 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 13 15:37:53.054802 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 13 15:37:53.054818 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 13 15:37:53.054832 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:37:53.054845 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 15:37:53.054858 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 13 15:37:53.054869 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 13 15:37:53.054880 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 13 15:37:53.054890 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 13 15:37:53.054903 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:37:53.054916 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:37:53.054930 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 13 15:37:53.054943 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 15:37:53.054955 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 13 15:37:53.054967 kernel: Booting paravirtualized kernel on Hyper-V
Feb 13 15:37:53.054981 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:37:53.054993 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 15:37:53.055010 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 15:37:53.055024 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 15:37:53.055038 kernel: pcpu-alloc: [0] 0 1
Feb 13 15:37:53.055056 kernel: Hyper-V: PV spinlocks enabled
Feb 13 15:37:53.055070 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:37:53.055086 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:37:53.055101 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:37:53.055116 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 13 15:37:53.055129 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:37:53.055140 kernel: Fallback order for Node 0: 0
Feb 13 15:37:53.055152 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 13 15:37:53.055167 kernel: Policy zone: Normal
Feb 13 15:37:53.055190 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:37:53.055204 kernel: software IO TLB: area num 2.
Feb 13 15:37:53.055221 kernel: Memory: 8069620K/8387460K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 317584K reserved, 0K cma-reserved)
Feb 13 15:37:53.055236 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:37:53.055250 kernel: ftrace: allocating 37920 entries in 149 pages
Feb 13 15:37:53.055265 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:37:53.055281 kernel: Dynamic Preempt: voluntary
Feb 13 15:37:53.055297 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:37:53.055314 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:37:53.055329 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:37:53.055348 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:37:53.055365 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:37:53.055381 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:37:53.055396 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:37:53.055409 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:37:53.055424 kernel: Using NULL legacy PIC
Feb 13 15:37:53.055443 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 13 15:37:53.055456 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:37:53.055469 kernel: Console: colour dummy device 80x25
Feb 13 15:37:53.055482 kernel: printk: console [tty1] enabled
Feb 13 15:37:53.055496 kernel: printk: console [ttyS0] enabled
Feb 13 15:37:53.055509 kernel: printk: bootconsole [earlyser0] disabled
Feb 13 15:37:53.055522 kernel: ACPI: Core revision 20230628
Feb 13 15:37:53.055535 kernel: Failed to register legacy timer interrupt
Feb 13 15:37:53.055548 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:37:53.055564 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 15:37:53.055576 kernel: Hyper-V: Using IPI hypercalls
Feb 13 15:37:53.055585 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Feb 13 15:37:53.055596 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Feb 13 15:37:53.055607 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Feb 13 15:37:53.055616 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Feb 13 15:37:53.055638 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Feb 13 15:37:53.055650 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Feb 13 15:37:53.055661 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593908)
Feb 13 15:37:53.055676 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 15:37:53.055685 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 15:37:53.055696 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:37:53.055705 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:37:53.055715 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:37:53.055723 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:37:53.055734 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 15:37:53.055743 kernel: RETBleed: Vulnerable
Feb 13 15:37:53.055753 kernel: Speculative Store Bypass: Vulnerable
Feb 13 15:37:53.055761 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:37:53.055774 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:37:53.055784 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:37:53.055793 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:37:53.055802 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:37:53.055812 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 15:37:53.055821 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 15:37:53.055834 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 15:37:53.055842 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:37:53.055854 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 13 15:37:53.055862 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 13 15:37:53.055873 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 13 15:37:53.055884 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 13 15:37:53.055895 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:37:53.055903 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:37:53.055914 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:37:53.055922 kernel: landlock: Up and running.
Feb 13 15:37:53.055933 kernel: SELinux: Initializing.
Feb 13 15:37:53.055943 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:37:53.055953 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:37:53.055962 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 15:37:53.055972 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:37:53.055981 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:37:53.055994 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:37:53.056005 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 15:37:53.056013 kernel: signal: max sigframe size: 3632
Feb 13 15:37:53.056024 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:37:53.056033 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:37:53.056044 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 15:37:53.056053 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:37:53.056063 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:37:53.056072 kernel: .... node #0, CPUs: #1
Feb 13 15:37:53.056086 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 13 15:37:53.056096 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 15:37:53.056107 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:37:53.056115 kernel: smpboot: Max logical packages: 1
Feb 13 15:37:53.056126 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS)
Feb 13 15:37:53.056134 kernel: devtmpfs: initialized
Feb 13 15:37:53.056145 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:37:53.056153 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 13 15:37:53.056166 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:37:53.056175 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:37:53.056186 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:37:53.056194 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:37:53.056205 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:37:53.056213 kernel: audit: type=2000 audit(1739461071.027:1): state=initialized audit_enabled=0 res=1
Feb 13 15:37:53.056224 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:37:53.056232 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:37:53.056243 kernel: cpuidle: using governor menu
Feb 13 15:37:53.056255 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:37:53.056265 kernel: dca service started, version 1.12.1
Feb 13 15:37:53.056277 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Feb 13 15:37:53.056285 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:37:53.056296 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:37:53.056304 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:37:53.056315 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:37:53.056324 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:37:53.056335 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:37:53.056345 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:37:53.056356 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:37:53.056365 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:37:53.056375 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:37:53.056383 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:37:53.056394 kernel: ACPI: Interpreter enabled
Feb 13 15:37:53.056402 kernel: ACPI: PM: (supports S0 S5)
Feb 13 15:37:53.056414 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:37:53.056422 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:37:53.056435 kernel: PCI: Ignoring E820 reservations for host bridge windows
Feb 13 15:37:53.056444 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 13 15:37:53.056454 kernel: iommu: Default domain type: Translated
Feb 13 15:37:53.056464 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:37:53.056474 kernel: efivars: Registered efivars operations
Feb 13 15:37:53.056483 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:37:53.056493 kernel: PCI: System does not support PCI
Feb 13 15:37:53.056501 kernel: vgaarb: loaded
Feb 13 15:37:53.056512 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 13 15:37:53.056524 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:37:53.056534 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:37:53.056542 kernel: pnp: PnP ACPI init
Feb 13 15:37:53.056553 kernel: pnp: PnP ACPI: found 3 devices
Feb 13 15:37:53.056562 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:37:53.056572 kernel: NET: Registered PF_INET protocol family
Feb 13 15:37:53.056581 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 15:37:53.056592 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 13 15:37:53.056603 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:37:53.056614 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:37:53.056625 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 15:37:53.056642 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 13 15:37:53.056650 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 15:37:53.056662 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 15:37:53.056670 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:37:53.056681 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:37:53.056689 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:37:53.056701 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 15:37:53.056712 kernel: software IO TLB: mapped [mem 0x000000003ae75000-0x000000003ee75000] (64MB)
Feb 13 15:37:53.056723 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 15:37:53.056732 kernel: Initialise system trusted keyrings
Feb 13 15:37:53.056742 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 13 15:37:53.056750 kernel: Key type asymmetric registered
Feb 13 15:37:53.056761 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:37:53.056769 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:37:53.056780 kernel: io scheduler mq-deadline registered
Feb 13 15:37:53.056788 kernel: io scheduler kyber registered
Feb 13 15:37:53.056801 kernel: io scheduler bfq registered
Feb 13 15:37:53.056810 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:37:53.056821 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:37:53.056829 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:37:53.056840 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 15:37:53.056849 kernel: i8042: PNP: No PS/2 controller found.
Feb 13 15:37:53.057001 kernel: rtc_cmos 00:02: registered as rtc0
Feb 13 15:37:53.057099 kernel: rtc_cmos 00:02: setting system clock to 2025-02-13T15:37:52 UTC (1739461072)
Feb 13 15:37:53.057195 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 13 15:37:53.057209 kernel: intel_pstate: CPU model not supported
Feb 13 15:37:53.057217 kernel: efifb: probing for efifb
Feb 13 15:37:53.057228 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 15:37:53.057237 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 15:37:53.057248 kernel: efifb: scrolling: redraw
Feb 13 15:37:53.057259 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 15:37:53.057268 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 15:37:53.057281 kernel: fb0: EFI VGA frame buffer device
Feb 13 15:37:53.057289 kernel: pstore: Using crash dump compression: deflate
Feb 13 15:37:53.057300 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 15:37:53.057308 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:37:53.057319 kernel: Segment Routing with IPv6
Feb 13 15:37:53.057327 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:37:53.057339 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:37:53.057347 kernel: Key type dns_resolver registered
Feb 13 15:37:53.057358 kernel: IPI shorthand broadcast: enabled
Feb 13 15:37:53.057366 kernel: sched_clock: Marking stable (790009600, 38683000)->(1013154400, -184461800)
Feb 13 15:37:53.057380 kernel: registered taskstats version 1
Feb 13 15:37:53.057391 kernel: Loading compiled-in X.509 certificates
Feb 13 15:37:53.057402 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0'
Feb 13 15:37:53.057447 kernel: Key type .fscrypt registered
Feb 13 15:37:53.057469 kernel: Key type fscrypt-provisioning registered
Feb 13 15:37:53.057489 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:37:53.057506 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:37:53.057522 kernel: ima: No architecture policies found
Feb 13 15:37:53.057548 kernel: clk: Disabling unused clocks
Feb 13 15:37:53.057562 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 15:37:53.057578 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 15:37:53.057593 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 15:37:53.057609 kernel: Run /init as init process
Feb 13 15:37:53.057753 kernel: with arguments:
Feb 13 15:37:53.057776 kernel: /init
Feb 13 15:37:53.057791 kernel: with environment:
Feb 13 15:37:53.057807 kernel: HOME=/
Feb 13 15:37:53.057821 kernel: TERM=linux
Feb 13 15:37:53.057841 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:37:53.057861 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:37:53.057883 systemd[1]: Detected virtualization microsoft.
Feb 13 15:37:53.057901 systemd[1]: Detected architecture x86-64.
Feb 13 15:37:53.057919 systemd[1]: Running in initrd.
Feb 13 15:37:53.057935 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:37:53.057954 systemd[1]: Hostname set to .
Feb 13 15:37:53.057977 systemd[1]: Initializing machine ID from random generator.
Feb 13 15:37:53.057993 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:37:53.058012 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:37:53.058028 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:37:53.058048 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:37:53.058069 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:37:53.058093 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:37:53.058111 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:37:53.058140 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:37:53.058155 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:37:53.058169 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:37:53.058187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:37:53.058200 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:37:53.058213 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:37:53.058226 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:37:53.058245 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:37:53.058259 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:37:53.058275 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:37:53.058292 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:37:53.058309 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:37:53.058324 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:37:53.058339 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:37:53.058353 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:37:53.058371 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:37:53.058387 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:37:53.058409 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:37:53.058424 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:37:53.058437 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:37:53.058451 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:37:53.058466 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:37:53.058508 systemd-journald[177]: Collecting audit messages is disabled.
Feb 13 15:37:53.058547 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:37:53.058563 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:37:53.058580 systemd-journald[177]: Journal started
Feb 13 15:37:53.058616 systemd-journald[177]: Runtime Journal (/run/log/journal/f63a0330c1ac4cab964cc07a7bacedcd) is 8.0M, max 158.8M, 150.8M free.
Feb 13 15:37:53.066061 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:37:53.066528 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:37:53.067272 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:37:53.081431 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:37:53.090262 systemd-modules-load[178]: Inserted module 'overlay'
Feb 13 15:37:53.092802 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:37:53.100689 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:37:53.103918 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:37:53.122317 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:37:53.138043 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:37:53.136816 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:37:53.137170 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:37:53.156097 systemd-modules-load[178]: Inserted module 'br_netfilter'
Feb 13 15:37:53.158247 kernel: Bridge firewalling registered
Feb 13 15:37:53.158845 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:37:53.161933 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:37:53.172861 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:37:53.184821 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:37:53.189545 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:37:53.202683 dracut-cmdline[207]: dracut-dracut-053
Feb 13 15:37:53.203143 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:37:53.211898 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:37:53.224897 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:37:53.274733 systemd-resolved[221]: Positive Trust Anchors:
Feb 13 15:37:53.274748 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:37:53.274806 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:37:53.297420 systemd-resolved[221]: Defaulting to hostname 'linux'.
Feb 13 15:37:53.305161 kernel: SCSI subsystem initialized
Feb 13 15:37:53.298660 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:37:53.307834 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:37:53.315649 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:37:53.326648 kernel: iscsi: registered transport (tcp)
Feb 13 15:37:53.347399 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:37:53.347467 kernel: QLogic iSCSI HBA Driver
Feb 13 15:37:53.382932 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:37:53.391808 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:37:53.420214 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:37:53.420319 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:37:53.423133 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:37:53.463655 kernel: raid6: avx512x4 gen() 18547 MB/s
Feb 13 15:37:53.483648 kernel: raid6: avx512x2 gen() 18459 MB/s
Feb 13 15:37:53.502652 kernel: raid6: avx512x1 gen() 18503 MB/s
Feb 13 15:37:53.521638 kernel: raid6: avx2x4 gen() 18350 MB/s
Feb 13 15:37:53.540647 kernel: raid6: avx2x2 gen() 18373 MB/s
Feb 13 15:37:53.560201 kernel: raid6: avx2x1 gen() 14003 MB/s
Feb 13 15:37:53.560245 kernel: raid6: using algorithm avx512x4 gen() 18547 MB/s
Feb 13 15:37:53.581023 kernel: raid6: .... xor() 6955 MB/s, rmw enabled
Feb 13 15:37:53.581075 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 15:37:53.602655 kernel: xor: automatically using best checksumming function avx
Feb 13 15:37:53.749657 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:37:53.759480 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:37:53.769809 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:37:53.783570 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Feb 13 15:37:53.788187 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:37:53.801790 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:37:53.813940 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Feb 13 15:37:53.842414 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:37:53.851787 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:37:53.892376 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:37:53.907906 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:37:53.932339 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:37:53.942566 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:37:53.948739 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:37:53.954100 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:37:53.964195 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:37:53.981650 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:37:54.000151 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:37:54.008942 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:37:54.008972 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:37:54.017116 kernel: hv_vmbus: Vmbus version:5.2
Feb 13 15:37:54.017888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:37:54.018122 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:37:54.029931 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:37:54.035600 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:37:54.035878 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:37:54.038770 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:37:54.058712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:37:54.069079 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:37:54.078336 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 15:37:54.078368 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 13 15:37:54.069236 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:37:54.083650 kernel: PTP clock support registered
Feb 13 15:37:54.086849 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:37:54.094900 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 13 15:37:54.101409 kernel: hv_utils: Registering HyperV Utility Driver
Feb 13 15:37:54.101456 kernel: hv_vmbus: registering driver hv_utils
Feb 13 15:37:54.114588 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 13 15:37:54.114667 kernel: hv_utils: Heartbeat IC version 3.0
Feb 13 15:37:54.116418 kernel: hv_utils: Shutdown IC version 3.2
Feb 13 15:37:54.118496 kernel: hv_utils: TimeSync IC version 4.0
Feb 13 15:37:54.120384 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:37:54.227098 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:37:54.227125 kernel: hv_vmbus: registering driver hv_netvsc
Feb 13 15:37:54.223283 systemd-resolved[221]: Clock change detected. Flushing caches.
Feb 13 15:37:54.241377 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:37:54.254700 kernel: hv_vmbus: registering driver hid_hyperv
Feb 13 15:37:54.264836 kernel: hv_vmbus: registering driver hv_storvsc
Feb 13 15:37:54.264892 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 13 15:37:54.264912 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 13 15:37:54.273505 kernel: scsi host0: storvsc_host_t
Feb 13 15:37:54.273751 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 13 15:37:54.278646 kernel: scsi host1: storvsc_host_t
Feb 13 15:37:54.282663 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 13 15:37:54.285662 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:37:54.305401 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 13 15:37:54.307660 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 15:37:54.307683 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 13 15:37:54.319887 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 13 15:37:54.335442 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 13 15:37:54.335681 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 15:37:54.335865 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 13 15:37:54.336028 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 13 15:37:54.336205 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:37:54.336227 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 15:37:54.427358 kernel: hv_netvsc 6045bde0-bfdf-6045-bde0-bfdf6045bde0 eth0: VF slot 1 added
Feb 13 15:37:54.438452 kernel: hv_vmbus: registering driver hv_pci
Feb 13 15:37:54.438523 kernel: hv_pci fbfc5ecf-0ffd-4390-81a9-4eae4d852e84: PCI VMBus probing: Using version 0x10004
Feb 13 15:37:54.479049 kernel: hv_pci fbfc5ecf-0ffd-4390-81a9-4eae4d852e84: PCI host bridge to bus 0ffd:00
Feb 13 15:37:54.479631 kernel: pci_bus 0ffd:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Feb 13 15:37:54.479812 kernel: pci_bus 0ffd:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 13 15:37:54.479958 kernel: pci 0ffd:00:02.0: [15b3:1016] type 00 class 0x020000
Feb 13 15:37:54.480158 kernel: pci 0ffd:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 13 15:37:54.480325 kernel: pci 0ffd:00:02.0: enabling Extended Tags
Feb 13 15:37:54.480494 kernel: pci 0ffd:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 0ffd:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 13 15:37:54.480681 kernel: pci_bus 0ffd:00: busn_res: [bus 00-ff] end is updated to 00
Feb 13 15:37:54.480832 kernel: pci 0ffd:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 13 15:37:54.643935 kernel: mlx5_core 0ffd:00:02.0: enabling device (0000 -> 0002)
Feb 13 15:37:54.865911 kernel: mlx5_core 0ffd:00:02.0: firmware version: 14.30.5000
Feb 13 15:37:54.866152 kernel: hv_netvsc 6045bde0-bfdf-6045-bde0-bfdf6045bde0 eth0: VF registering: eth1
Feb 13 15:37:54.866333 kernel: mlx5_core 0ffd:00:02.0 eth1: joined to eth0
Feb 13 15:37:54.866521 kernel: mlx5_core 0ffd:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Feb 13 15:37:54.825010 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Feb 13 15:37:54.874634 kernel: mlx5_core 0ffd:00:02.0 enP4093s1: renamed from eth1
Feb 13 15:37:54.898637 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (455)
Feb 13 15:37:54.905452 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Feb 13 15:37:54.923528 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 15:37:54.935734 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (451)
Feb 13 15:37:54.950296 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Feb 13 15:37:54.953371 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Feb 13 15:37:54.969809 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:37:54.986677 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:37:54.996636 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:37:56.005491 disk-uuid[602]: The operation has completed successfully.
Feb 13 15:37:56.009600 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:37:56.084766 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:37:56.084884 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:37:56.105767 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:37:56.111106 sh[688]: Success
Feb 13 15:37:56.142396 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:37:56.335647 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:37:56.348737 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:37:56.353242 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:37:56.368629 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2
Feb 13 15:37:56.368678 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:37:56.373512 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:37:56.376309 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:37:56.378523 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:37:56.648090 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:37:56.653282 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:37:56.661791 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:37:56.669799 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:37:56.684543 kernel: BTRFS info (device sda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:37:56.684595 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:37:56.684648 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:37:56.702629 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:37:56.716511 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:37:56.718602 kernel: BTRFS info (device sda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:37:56.727240 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:37:56.736803 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:37:56.760583 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:37:56.769896 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:37:56.788701 systemd-networkd[872]: lo: Link UP
Feb 13 15:37:56.788710 systemd-networkd[872]: lo: Gained carrier
Feb 13 15:37:56.790826 systemd-networkd[872]: Enumeration completed
Feb 13 15:37:56.790907 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:37:56.794046 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:37:56.794050 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:37:56.794904 systemd[1]: Reached target network.target - Network.
Feb 13 15:37:56.855638 kernel: mlx5_core 0ffd:00:02.0 enP4093s1: Link up
Feb 13 15:37:56.888639 kernel: hv_netvsc 6045bde0-bfdf-6045-bde0-bfdf6045bde0 eth0: Data path switched to VF: enP4093s1
Feb 13 15:37:56.889199 systemd-networkd[872]: enP4093s1: Link UP
Feb 13 15:37:56.891566 systemd-networkd[872]: eth0: Link UP
Feb 13 15:37:56.891869 systemd-networkd[872]: eth0: Gained carrier
Feb 13 15:37:56.891886 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:37:56.894860 systemd-networkd[872]: enP4093s1: Gained carrier
Feb 13 15:37:56.914654 systemd-networkd[872]: eth0: DHCPv4 address 10.200.8.20/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 13 15:37:57.535542 ignition[835]: Ignition 2.20.0
Feb 13 15:37:57.535555 ignition[835]: Stage: fetch-offline
Feb 13 15:37:57.537145 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:37:57.535603 ignition[835]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:37:57.535627 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:37:57.535749 ignition[835]: parsed url from cmdline: ""
Feb 13 15:37:57.535754 ignition[835]: no config URL provided
Feb 13 15:37:57.535761 ignition[835]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:37:57.535771 ignition[835]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:37:57.535780 ignition[835]: failed to fetch config: resource requires networking
Feb 13 15:37:57.536027 ignition[835]: Ignition finished successfully
Feb 13 15:37:57.562798 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:37:57.577648 ignition[880]: Ignition 2.20.0
Feb 13 15:37:57.577659 ignition[880]: Stage: fetch
Feb 13 15:37:57.577871 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:37:57.577884 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:37:57.578009 ignition[880]: parsed url from cmdline: ""
Feb 13 15:37:57.578014 ignition[880]: no config URL provided
Feb 13 15:37:57.578020 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:37:57.578030 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:37:57.578056 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 13 15:37:57.652215 ignition[880]: GET result: OK
Feb 13 15:37:57.652350 ignition[880]: config has been read from IMDS userdata
Feb 13 15:37:57.652389 ignition[880]: parsing config with SHA512: 301cf5957c45b04c89ba8f1c490fa91bed39f4e83efb43d7ca6cda35644e157c009f22433f2a6f7d1b5d6b00089dda85506f79d525e200895bcc6e1e5486b340
Feb 13 15:37:57.660482 unknown[880]: fetched base config from "system"
Feb 13 15:37:57.660502 unknown[880]: fetched base config from "system"
Feb 13 15:37:57.661219 ignition[880]: fetch: fetch complete
Feb 13 15:37:57.660512 unknown[880]: fetched user config from "azure"
Feb 13 15:37:57.661225 ignition[880]: fetch: fetch passed
Feb 13 15:37:57.663067 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:37:57.661278 ignition[880]: Ignition finished successfully
Feb 13 15:37:57.676829 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:37:57.691533 ignition[886]: Ignition 2.20.0
Feb 13 15:37:57.691544 ignition[886]: Stage: kargs
Feb 13 15:37:57.691782 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:37:57.694848 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:37:57.691796 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:37:57.692718 ignition[886]: kargs: kargs passed
Feb 13 15:37:57.692766 ignition[886]: Ignition finished successfully
Feb 13 15:37:57.705755 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:37:57.720724 ignition[892]: Ignition 2.20.0
Feb 13 15:37:57.720735 ignition[892]: Stage: disks
Feb 13 15:37:57.722729 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:37:57.720951 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:37:57.720965 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:37:57.721843 ignition[892]: disks: disks passed
Feb 13 15:37:57.721886 ignition[892]: Ignition finished successfully
Feb 13 15:37:57.735016 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:37:57.737723 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:37:57.742642 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:37:57.742739 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:37:57.743086 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:37:57.756877 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:37:57.836689 systemd-fsck[900]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Feb 13 15:37:57.842260 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:37:57.853089 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:37:57.943631 kernel: EXT4-fs (sda9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none.
Feb 13 15:37:57.944145 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:37:57.948645 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:37:57.991732 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:37:57.996342 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:37:58.001117 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 15:37:58.009727 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (911)
Feb 13 15:37:58.010233 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:37:58.010401 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:37:58.026837 kernel: BTRFS info (device sda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:37:58.026899 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:37:58.026924 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:37:58.029059 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:37:58.036629 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:37:58.037006 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:37:58.042371 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:37:58.596120 systemd-networkd[872]: enP4093s1: Gained IPv6LL
Feb 13 15:37:58.642535 coreos-metadata[913]: Feb 13 15:37:58.642 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 13 15:37:58.646781 coreos-metadata[913]: Feb 13 15:37:58.645 INFO Fetch successful
Feb 13 15:37:58.646781 coreos-metadata[913]: Feb 13 15:37:58.645 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 13 15:37:58.658723 coreos-metadata[913]: Feb 13 15:37:58.658 INFO Fetch successful
Feb 13 15:37:58.677146 coreos-metadata[913]: Feb 13 15:37:58.677 INFO wrote hostname ci-4152.2.1-a-02a9d39241 to /sysroot/etc/hostname
Feb 13 15:37:58.679668 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:37:58.693179 initrd-setup-root[943]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:37:58.727208 initrd-setup-root[950]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:37:58.732155 initrd-setup-root[957]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:37:58.751061 initrd-setup-root[964]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:37:58.784841 systemd-networkd[872]: eth0: Gained IPv6LL
Feb 13 15:37:59.763801 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:37:59.772708 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:37:59.779790 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:37:59.788673 kernel: BTRFS info (device sda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:37:59.789681 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:37:59.817470 ignition[1031]: INFO : Ignition 2.20.0
Feb 13 15:37:59.817470 ignition[1031]: INFO : Stage: mount
Feb 13 15:37:59.821805 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:37:59.821805 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:37:59.827715 ignition[1031]: INFO : mount: mount passed
Feb 13 15:37:59.829455 ignition[1031]: INFO : Ignition finished successfully
Feb 13 15:37:59.828644 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:37:59.837077 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:37:59.843721 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:37:59.854796 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:37:59.868765 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1044)
Feb 13 15:37:59.868820 kernel: BTRFS info (device sda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:37:59.871503 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:37:59.873694 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:37:59.878851 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:37:59.880268 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:37:59.902592 ignition[1061]: INFO : Ignition 2.20.0
Feb 13 15:37:59.902592 ignition[1061]: INFO : Stage: files
Feb 13 15:37:59.906399 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:37:59.906399 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:37:59.906399 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:37:59.937583 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:37:59.937583 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:38:00.016902 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:38:00.020521 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:38:00.020521 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:38:00.020521 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:38:00.020521 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 15:38:00.017439 unknown[1061]: wrote ssh authorized keys file for user: core
Feb 13 15:38:00.096069 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:38:00.544369 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:38:00.549530 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:38:00.549530 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 15:38:01.048862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:38:01.166887 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:38:01.171770 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:38:01.171770 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:38:01.171770 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:38:01.184264 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:38:01.184264 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:38:01.184264 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:38:01.184264 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:38:01.184264 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:38:01.184264 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:38:01.207996 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:38:01.207996 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:38:01.207996 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:38:01.207996 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:38:01.207996 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Feb 13 15:38:01.543744 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:38:01.799138 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:38:01.799138 ignition[1061]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:38:01.807792 ignition[1061]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:38:01.812306 ignition[1061]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:38:01.812306 ignition[1061]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:38:01.819385 ignition[1061]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:38:01.822533 ignition[1061]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:38:01.825919 ignition[1061]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:38:01.829666 ignition[1061]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:38:01.833451 ignition[1061]: INFO : files: files passed
Feb 13 15:38:01.835147 ignition[1061]: INFO : Ignition finished successfully
Feb 13 15:38:01.834650 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:38:01.846859 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:38:01.852567 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:38:01.855471 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:38:01.855562 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:38:01.874419 initrd-setup-root-after-ignition[1089]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:38:01.874419 initrd-setup-root-after-ignition[1089]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:38:01.884325 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:38:01.877942 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:38:01.881179 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:38:01.896837 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:38:01.920966 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:38:01.921091 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:38:01.925990 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:38:01.933018 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:38:01.933542 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:38:01.944830 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:38:01.959233 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:38:01.967785 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:38:01.978907 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:38:01.981662 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:38:01.989018 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:38:01.991539 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:38:01.993917 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:38:02.000670 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:38:02.003037 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:38:02.008952 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:38:02.011280 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:38:02.018770 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:38:02.021516 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:38:02.027952 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:38:02.030884 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:38:02.037760 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:38:02.040366 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:38:02.044104 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Feb 13 15:38:02.044277 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:38:02.048689 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:38:02.052658 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:38:02.062650 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:38:02.065796 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:38:02.068770 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:38:02.068955 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:38:02.077928 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:38:02.083256 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:38:02.088252 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:38:02.088387 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:38:02.099392 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 15:38:02.099556 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:38:02.111844 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:38:02.114048 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:38:02.114243 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:38:02.128907 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:38:02.130940 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Feb 13 15:38:02.138529 ignition[1113]: INFO : Ignition 2.20.0 Feb 13 15:38:02.138529 ignition[1113]: INFO : Stage: umount Feb 13 15:38:02.138529 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:38:02.138529 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:38:02.138529 ignition[1113]: INFO : umount: umount passed Feb 13 15:38:02.138529 ignition[1113]: INFO : Ignition finished successfully Feb 13 15:38:02.131129 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:38:02.138464 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:38:02.138655 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:38:02.152714 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:38:02.154640 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:38:02.158500 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:38:02.158588 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:38:02.167395 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:38:02.167450 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:38:02.171769 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:38:02.173960 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:38:02.181169 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:38:02.181228 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:38:02.189869 systemd[1]: Stopped target network.target - Network. Feb 13 15:38:02.193831 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:38:02.193902 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:38:02.206416 systemd[1]: Stopped target paths.target - Path Units. 
Feb 13 15:38:02.208339 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:38:02.212600 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:38:02.218530 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:38:02.222420 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:38:02.226809 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:38:02.226870 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:38:02.232549 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:38:02.232605 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:38:02.236820 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:38:02.236884 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:38:02.240897 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:38:02.240959 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:38:02.245408 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:38:02.249495 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:38:02.254936 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:38:02.255453 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:38:02.255536 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:38:02.258852 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:38:02.258950 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:38:02.269742 systemd-networkd[872]: eth0: DHCPv6 lease lost Feb 13 15:38:02.273331 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:38:02.273474 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Feb 13 15:38:02.276381 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:38:02.276490 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:38:02.282219 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:38:02.282275 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:38:02.296306 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:38:02.300569 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:38:02.300639 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:38:02.303475 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:38:02.303523 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:38:02.307900 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:38:02.307957 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:38:02.312753 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:38:02.312799 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:38:02.331035 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:38:02.345286 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:38:02.345455 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:38:02.350297 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:38:02.350339 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:38:02.359514 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:38:02.359566 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 15:38:02.366692 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:38:02.366772 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:38:02.373771 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:38:02.373836 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:38:02.378532 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:38:02.378581 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:38:02.391471 kernel: hv_netvsc 6045bde0-bfdf-6045-bde0-bfdf6045bde0 eth0: Data path switched from VF: enP4093s1 Feb 13 15:38:02.395761 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:38:02.400551 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:38:02.400649 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:38:02.407903 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:38:02.407959 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:38:02.413026 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:38:02.413121 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:38:02.419601 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:38:02.419734 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:38:02.423868 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:38:02.440905 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:38:02.450911 systemd[1]: Switching root. 
Feb 13 15:38:02.552150 systemd-journald[177]: Journal stopped Feb 13 15:37:53.053508 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025 Feb 13 15:37:53.053545 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:37:53.053561 kernel: BIOS-provided physical RAM map: Feb 13 15:37:53.053572 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 13 15:37:53.053582 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Feb 13 15:37:53.053593 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Feb 13 15:37:53.053607 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Feb 13 15:37:53.053622 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Feb 13 15:37:53.053656 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Feb 13 15:37:53.053668 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Feb 13 15:37:53.053680 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Feb 13 15:37:53.053691 kernel: printk: bootconsole [earlyser0] enabled Feb 13 15:37:53.053702 kernel: NX (Execute Disable) protection: active Feb 13 15:37:53.053714 kernel: APIC: Static calls initialized Feb 13 15:37:53.053731 kernel: efi: EFI v2.7 by Microsoft Feb 13 15:37:53.053744 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018 Feb 13 15:37:53.053757 
kernel: random: crng init done Feb 13 15:37:53.053769 kernel: secureboot: Secure boot disabled Feb 13 15:37:53.053781 kernel: SMBIOS 3.1.0 present. Feb 13 15:37:53.053794 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Feb 13 15:37:53.053806 kernel: Hypervisor detected: Microsoft Hyper-V Feb 13 15:37:53.053819 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Feb 13 15:37:53.053832 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0 Feb 13 15:37:53.053845 kernel: Hyper-V: Nested features: 0x1e0101 Feb 13 15:37:53.053860 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Feb 13 15:37:53.053873 kernel: Hyper-V: Using hypercall for remote TLB flush Feb 13 15:37:53.053886 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 13 15:37:53.053899 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 13 15:37:53.053913 kernel: tsc: Marking TSC unstable due to running on Hyper-V Feb 13 15:37:53.053926 kernel: tsc: Detected 2593.908 MHz processor Feb 13 15:37:53.053939 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 15:37:53.053953 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 15:37:53.053966 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Feb 13 15:37:53.053982 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Feb 13 15:37:53.053995 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 15:37:53.054008 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Feb 13 15:37:53.054020 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Feb 13 15:37:53.054033 kernel: Using GB pages for direct mapping Feb 13 15:37:53.054046 kernel: ACPI: Early table checksum verification disabled Feb 13 15:37:53.054059 kernel: ACPI: 
RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Feb 13 15:37:53.054078 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:37:53.054095 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:37:53.054109 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Feb 13 15:37:53.054122 kernel: ACPI: FACS 0x000000003FFFE000 000040 Feb 13 15:37:53.054136 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:37:53.054151 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:37:53.054165 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:37:53.054181 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:37:53.054196 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:37:53.054210 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:37:53.054224 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:37:53.054237 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Feb 13 15:37:53.054251 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Feb 13 15:37:53.054265 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Feb 13 15:37:53.054279 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Feb 13 15:37:53.054293 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Feb 13 15:37:53.054309 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Feb 13 15:37:53.054323 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Feb 13 15:37:53.054336 kernel: ACPI: Reserving SRAT table memory at [mem 
0x3ffd4000-0x3ffd42cf] Feb 13 15:37:53.054350 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Feb 13 15:37:53.054364 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Feb 13 15:37:53.054378 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 15:37:53.054392 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 15:37:53.054406 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Feb 13 15:37:53.054420 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Feb 13 15:37:53.054436 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Feb 13 15:37:53.054450 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Feb 13 15:37:53.054464 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Feb 13 15:37:53.054478 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Feb 13 15:37:53.054492 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Feb 13 15:37:53.054506 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Feb 13 15:37:53.054519 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Feb 13 15:37:53.054534 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Feb 13 15:37:53.054551 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Feb 13 15:37:53.054565 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Feb 13 15:37:53.054579 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Feb 13 15:37:53.054592 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Feb 13 15:37:53.054606 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Feb 13 15:37:53.054620 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Feb 13 15:37:53.054656 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + 
[mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Feb 13 15:37:53.054668 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Feb 13 15:37:53.054681 kernel: Zone ranges: Feb 13 15:37:53.054698 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 15:37:53.054712 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 15:37:53.054726 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Feb 13 15:37:53.054740 kernel: Movable zone start for each node Feb 13 15:37:53.054754 kernel: Early memory node ranges Feb 13 15:37:53.054766 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 13 15:37:53.054778 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Feb 13 15:37:53.054790 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Feb 13 15:37:53.054802 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Feb 13 15:37:53.054818 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Feb 13 15:37:53.054832 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 15:37:53.054845 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 13 15:37:53.054858 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Feb 13 15:37:53.054869 kernel: ACPI: PM-Timer IO Port: 0x408 Feb 13 15:37:53.054880 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Feb 13 15:37:53.054890 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Feb 13 15:37:53.054903 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 15:37:53.054916 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 15:37:53.054930 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Feb 13 15:37:53.054943 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 15:37:53.054955 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Feb 13 15:37:53.054967 kernel: Booting paravirtualized kernel on Hyper-V Feb 13 15:37:53.054981 kernel: clocksource: 
refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 15:37:53.054993 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 15:37:53.055010 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 15:37:53.055024 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 15:37:53.055038 kernel: pcpu-alloc: [0] 0 1 Feb 13 15:37:53.055056 kernel: Hyper-V: PV spinlocks enabled Feb 13 15:37:53.055070 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 15:37:53.055086 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:37:53.055101 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:37:53.055116 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 13 15:37:53.055129 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:37:53.055140 kernel: Fallback order for Node 0: 0 Feb 13 15:37:53.055152 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Feb 13 15:37:53.055167 kernel: Policy zone: Normal Feb 13 15:37:53.055190 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:37:53.055204 kernel: software IO TLB: area num 2. 
Feb 13 15:37:53.055221 kernel: Memory: 8069620K/8387460K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 317584K reserved, 0K cma-reserved) Feb 13 15:37:53.055236 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 15:37:53.055250 kernel: ftrace: allocating 37920 entries in 149 pages Feb 13 15:37:53.055265 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 15:37:53.055281 kernel: Dynamic Preempt: voluntary Feb 13 15:37:53.055297 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:37:53.055314 kernel: rcu: RCU event tracing is enabled. Feb 13 15:37:53.055329 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 15:37:53.055348 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:37:53.055365 kernel: Rude variant of Tasks RCU enabled. Feb 13 15:37:53.055381 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:37:53.055396 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 15:37:53.055409 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 15:37:53.055424 kernel: Using NULL legacy PIC Feb 13 15:37:53.055443 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Feb 13 15:37:53.055456 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Feb 13 15:37:53.055469 kernel: Console: colour dummy device 80x25 Feb 13 15:37:53.055482 kernel: printk: console [tty1] enabled Feb 13 15:37:53.055496 kernel: printk: console [ttyS0] enabled Feb 13 15:37:53.055509 kernel: printk: bootconsole [earlyser0] disabled Feb 13 15:37:53.055522 kernel: ACPI: Core revision 20230628 Feb 13 15:37:53.055535 kernel: Failed to register legacy timer interrupt Feb 13 15:37:53.055548 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 15:37:53.055564 kernel: Hyper-V: enabling crash_kexec_post_notifiers Feb 13 15:37:53.055576 kernel: Hyper-V: Using IPI hypercalls Feb 13 15:37:53.055585 kernel: APIC: send_IPI() replaced with hv_send_ipi() Feb 13 15:37:53.055596 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Feb 13 15:37:53.055607 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Feb 13 15:37:53.055616 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Feb 13 15:37:53.055638 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Feb 13 15:37:53.055650 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Feb 13 15:37:53.055661 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593908) Feb 13 15:37:53.055676 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 15:37:53.055685 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 15:37:53.055696 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 15:37:53.055705 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 15:37:53.055715 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 15:37:53.055723 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 15:37:53.055734 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Feb 13 15:37:53.055743 kernel: RETBleed: Vulnerable Feb 13 15:37:53.055753 kernel: Speculative Store Bypass: Vulnerable Feb 13 15:37:53.055761 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 15:37:53.055774 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 15:37:53.055784 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 15:37:53.055793 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 15:37:53.055802 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 15:37:53.055812 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 13 15:37:53.055821 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 13 15:37:53.055834 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 13 15:37:53.055842 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 15:37:53.055854 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Feb 13 15:37:53.055862 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Feb 13 15:37:53.055873 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Feb 13 15:37:53.055884 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Feb 13 15:37:53.055895 kernel: Freeing SMP alternatives memory: 32K Feb 13 15:37:53.055903 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:37:53.055914 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:37:53.055922 kernel: landlock: Up and running. Feb 13 15:37:53.055933 kernel: SELinux: Initializing. 
Feb 13 15:37:53.055943 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 15:37:53.055953 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 15:37:53.055962 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 13 15:37:53.055972 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:37:53.055981 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:37:53.055994 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:37:53.056005 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 13 15:37:53.056013 kernel: signal: max sigframe size: 3632 Feb 13 15:37:53.056024 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:37:53.056033 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:37:53.056044 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 15:37:53.056053 kernel: smp: Bringing up secondary CPUs ... Feb 13 15:37:53.056063 kernel: smpboot: x86: Booting SMP configuration: Feb 13 15:37:53.056072 kernel: .... node #0, CPUs: #1 Feb 13 15:37:53.056086 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Feb 13 15:37:53.056096 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Feb 13 15:37:53.056107 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 15:37:53.056115 kernel: smpboot: Max logical packages: 1 Feb 13 15:37:53.056126 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS) Feb 13 15:37:53.056134 kernel: devtmpfs: initialized Feb 13 15:37:53.056145 kernel: x86/mm: Memory block size: 128MB Feb 13 15:37:53.056153 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Feb 13 15:37:53.056166 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:37:53.056175 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 15:37:53.056186 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:37:53.056194 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:37:53.056205 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:37:53.056213 kernel: audit: type=2000 audit(1739461071.027:1): state=initialized audit_enabled=0 res=1 Feb 13 15:37:53.056224 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:37:53.056232 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 15:37:53.056243 kernel: cpuidle: using governor menu Feb 13 15:37:53.056255 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:37:53.056265 kernel: dca service started, version 1.12.1 Feb 13 15:37:53.056277 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Feb 13 15:37:53.056285 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 15:37:53.056296 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:37:53.056304 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:37:53.056315 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:37:53.056324 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:37:53.056335 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:37:53.056345 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:37:53.056356 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:37:53.056365 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:37:53.056375 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:37:53.056383 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 15:37:53.056394 kernel: ACPI: Interpreter enabled Feb 13 15:37:53.056402 kernel: ACPI: PM: (supports S0 S5) Feb 13 15:37:53.056414 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 15:37:53.056422 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 15:37:53.056435 kernel: PCI: Ignoring E820 reservations for host bridge windows Feb 13 15:37:53.056444 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Feb 13 15:37:53.056454 kernel: iommu: Default domain type: Translated Feb 13 15:37:53.056464 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 15:37:53.056474 kernel: efivars: Registered efivars operations Feb 13 15:37:53.056483 kernel: PCI: Using ACPI for IRQ routing Feb 13 15:37:53.056493 kernel: PCI: System does not support PCI Feb 13 15:37:53.056501 kernel: vgaarb: loaded Feb 13 15:37:53.056512 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Feb 13 15:37:53.056524 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:37:53.056534 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:37:53.056542 kernel: pnp: PnP ACPI init
Feb 13 15:37:53.056553 kernel: pnp: PnP ACPI: found 3 devices Feb 13 15:37:53.056562 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 15:37:53.056572 kernel: NET: Registered PF_INET protocol family Feb 13 15:37:53.056581 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 15:37:53.056592 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 13 15:37:53.056603 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:37:53.056614 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:37:53.056625 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 15:37:53.056642 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 13 15:37:53.056650 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 15:37:53.056662 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 15:37:53.056670 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:37:53.056681 kernel: NET: Registered PF_XDP protocol family Feb 13 15:37:53.056689 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:37:53.056701 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 15:37:53.056712 kernel: software IO TLB: mapped [mem 0x000000003ae75000-0x000000003ee75000] (64MB) Feb 13 15:37:53.056723 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 15:37:53.056732 kernel: Initialise system trusted keyrings Feb 13 15:37:53.056742 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 13 15:37:53.056750 kernel: Key type asymmetric registered Feb 13 15:37:53.056761 kernel: Asymmetric key parser 'x509' registered Feb 13 15:37:53.056769 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:37:53.056780 kernel: io scheduler mq-deadline registered Feb 13 15:37:53.056788 kernel: io scheduler kyber registered Feb 13 15:37:53.056801 kernel: io scheduler bfq registered Feb 13 15:37:53.056810 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 15:37:53.056821 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:37:53.056829 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 15:37:53.056840 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 15:37:53.056849 kernel: i8042: PNP: No PS/2 controller found. Feb 13 15:37:53.057001 kernel: rtc_cmos 00:02: registered as rtc0 Feb 13 15:37:53.057099 kernel: rtc_cmos 00:02: setting system clock to 2025-02-13T15:37:52 UTC (1739461072) Feb 13 15:37:53.057195 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Feb 13 15:37:53.057209 kernel: intel_pstate: CPU model not supported Feb 13 15:37:53.057217 kernel: efifb: probing for efifb Feb 13 15:37:53.057228 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 13 15:37:53.057237 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 13 15:37:53.057248 kernel: efifb: scrolling: redraw Feb 13 15:37:53.057259 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 13 15:37:53.057268 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 15:37:53.057281 kernel: fb0: EFI VGA frame buffer device Feb 13 15:37:53.057289 kernel: pstore: Using crash dump compression: deflate Feb 13 15:37:53.057300 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 15:37:53.057308 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:37:53.057319 kernel: Segment Routing with IPv6 Feb 13 15:37:53.057327 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:37:53.057339 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:37:53.057347 kernel: Key type dns_resolver registered Feb 13 15:37:53.057358 kernel: IPI shorthand broadcast: enabled
Feb 13 15:37:53.057366 kernel: sched_clock: Marking stable (790009600, 38683000)->(1013154400, -184461800) Feb 13 15:37:53.057380 kernel: registered taskstats version 1 Feb 13 15:37:53.057391 kernel: Loading compiled-in X.509 certificates Feb 13 15:37:53.057402 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0' Feb 13 15:37:53.057447 kernel: Key type .fscrypt registered Feb 13 15:37:53.057469 kernel: Key type fscrypt-provisioning registered Feb 13 15:37:53.057489 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 15:37:53.057506 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:37:53.057522 kernel: ima: No architecture policies found Feb 13 15:37:53.057548 kernel: clk: Disabling unused clocks Feb 13 15:37:53.057562 kernel: Freeing unused kernel image (initmem) memory: 42976K Feb 13 15:37:53.057578 kernel: Write protecting the kernel read-only data: 36864k Feb 13 15:37:53.057593 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Feb 13 15:37:53.057609 kernel: Run /init as init process Feb 13 15:37:53.057753 kernel: with arguments: Feb 13 15:37:53.057776 kernel: /init Feb 13 15:37:53.057791 kernel: with environment: Feb 13 15:37:53.057807 kernel: HOME=/ Feb 13 15:37:53.057821 kernel: TERM=linux Feb 13 15:37:53.057841 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:37:53.057861 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:37:53.057883 systemd[1]: Detected virtualization microsoft. Feb 13 15:37:53.057901 systemd[1]: Detected architecture x86-64. Feb 13 15:37:53.057919 systemd[1]: Running in initrd. Feb 13 15:37:53.057935 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:37:53.057954 systemd[1]: Hostname set to . Feb 13 15:37:53.057977 systemd[1]: Initializing machine ID from random generator. Feb 13 15:37:53.057993 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:37:53.058012 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:37:53.058028 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:37:53.058048 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:37:53.058069 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:37:53.058093 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:37:53.058111 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:37:53.058140 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:37:53.058155 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:37:53.058169 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:37:53.058187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:37:53.058200 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:37:53.058213 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:37:53.058226 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:37:53.058245 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:37:53.058259 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:37:53.058275 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Feb 13 15:37:53.058292 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:37:53.058309 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:37:53.058324 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:37:53.058339 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:37:53.058353 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:37:53.058371 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:37:53.058387 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:37:53.058409 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:37:53.058424 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:37:53.058437 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:37:53.058451 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:37:53.058466 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:37:53.058508 systemd-journald[177]: Collecting audit messages is disabled. Feb 13 15:37:53.058547 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:37:53.058563 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:37:53.058580 systemd-journald[177]: Journal started Feb 13 15:37:53.058616 systemd-journald[177]: Runtime Journal (/run/log/journal/f63a0330c1ac4cab964cc07a7bacedcd) is 8.0M, max 158.8M, 150.8M free. Feb 13 15:37:53.066061 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:37:53.066528 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:37:53.067272 systemd[1]: Finished systemd-fsck-usr.service. 
Feb 13 15:37:53.081431 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:37:53.090262 systemd-modules-load[178]: Inserted module 'overlay' Feb 13 15:37:53.092802 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:37:53.100689 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:37:53.103918 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:37:53.122317 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:37:53.138043 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:37:53.136816 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:37:53.137170 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:37:53.156097 systemd-modules-load[178]: Inserted module 'br_netfilter' Feb 13 15:37:53.158247 kernel: Bridge firewalling registered Feb 13 15:37:53.158845 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:37:53.161933 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:37:53.172861 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:37:53.184821 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:37:53.189545 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:37:53.202683 dracut-cmdline[207]: dracut-dracut-053 Feb 13 15:37:53.203143 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 15:37:53.211898 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:37:53.224897 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:37:53.274733 systemd-resolved[221]: Positive Trust Anchors: Feb 13 15:37:53.274748 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:37:53.274806 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:37:53.297420 systemd-resolved[221]: Defaulting to hostname 'linux'. Feb 13 15:37:53.305161 kernel: SCSI subsystem initialized Feb 13 15:37:53.298660 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:37:53.307834 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:37:53.315649 kernel: Loading iSCSI transport class v2.0-870. 
Feb 13 15:37:53.326648 kernel: iscsi: registered transport (tcp) Feb 13 15:37:53.347399 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:37:53.347467 kernel: QLogic iSCSI HBA Driver Feb 13 15:37:53.382932 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:37:53.391808 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:37:53.420214 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:37:53.420319 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:37:53.423133 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:37:53.463655 kernel: raid6: avx512x4 gen() 18547 MB/s Feb 13 15:37:53.483648 kernel: raid6: avx512x2 gen() 18459 MB/s Feb 13 15:37:53.502652 kernel: raid6: avx512x1 gen() 18503 MB/s Feb 13 15:37:53.521638 kernel: raid6: avx2x4 gen() 18350 MB/s Feb 13 15:37:53.540647 kernel: raid6: avx2x2 gen() 18373 MB/s Feb 13 15:37:53.560201 kernel: raid6: avx2x1 gen() 14003 MB/s Feb 13 15:37:53.560245 kernel: raid6: using algorithm avx512x4 gen() 18547 MB/s Feb 13 15:37:53.581023 kernel: raid6: .... xor() 6955 MB/s, rmw enabled Feb 13 15:37:53.581075 kernel: raid6: using avx512x2 recovery algorithm Feb 13 15:37:53.602655 kernel: xor: automatically using best checksumming function avx Feb 13 15:37:53.749657 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:37:53.759480 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:37:53.769809 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:37:53.783570 systemd-udevd[396]: Using default interface naming scheme 'v255'. Feb 13 15:37:53.788187 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:37:53.801790 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Feb 13 15:37:53.813940 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Feb 13 15:37:53.842414 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:37:53.851787 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:37:53.892376 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:37:53.907906 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:37:53.932339 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:37:53.942566 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:37:53.948739 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:37:53.954100 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:37:53.964195 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:37:53.981650 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 15:37:54.000151 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:37:54.008942 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 15:37:54.008972 kernel: AES CTR mode by8 optimization enabled Feb 13 15:37:54.017116 kernel: hv_vmbus: Vmbus version:5.2 Feb 13 15:37:54.017888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:37:54.018122 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:37:54.029931 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:37:54.035600 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:37:54.035878 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:37:54.038770 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 15:37:54.058712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:37:54.069079 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:37:54.078336 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 15:37:54.078368 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 15:37:54.069236 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:37:54.083650 kernel: PTP clock support registered Feb 13 15:37:54.086849 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:37:54.094900 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 13 15:37:54.101409 kernel: hv_utils: Registering HyperV Utility Driver Feb 13 15:37:54.101456 kernel: hv_vmbus: registering driver hv_utils Feb 13 15:37:54.114588 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 13 15:37:54.114667 kernel: hv_utils: Heartbeat IC version 3.0 Feb 13 15:37:54.116418 kernel: hv_utils: Shutdown IC version 3.2 Feb 13 15:37:54.118496 kernel: hv_utils: TimeSync IC version 4.0 Feb 13 15:37:54.120384 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:37:54.227098 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:37:54.227125 kernel: hv_vmbus: registering driver hv_netvsc Feb 13 15:37:54.223283 systemd-resolved[221]: Clock change detected. Flushing caches. Feb 13 15:37:54.241377 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 13 15:37:54.254700 kernel: hv_vmbus: registering driver hid_hyperv Feb 13 15:37:54.264836 kernel: hv_vmbus: registering driver hv_storvsc Feb 13 15:37:54.264892 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 13 15:37:54.264912 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 13 15:37:54.273505 kernel: scsi host0: storvsc_host_t Feb 13 15:37:54.273751 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 13 15:37:54.278646 kernel: scsi host1: storvsc_host_t Feb 13 15:37:54.282663 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 13 15:37:54.285662 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:37:54.305401 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 13 15:37:54.307660 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:37:54.307683 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 13 15:37:54.319887 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 13 15:37:54.335442 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 15:37:54.335681 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 15:37:54.335865 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 13 15:37:54.336028 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 13 15:37:54.336205 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:37:54.336227 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 15:37:54.427358 kernel: hv_netvsc 6045bde0-bfdf-6045-bde0-bfdf6045bde0 eth0: VF slot 1 added Feb 13 15:37:54.438452 kernel: hv_vmbus: registering driver hv_pci Feb 13 15:37:54.438523 kernel: hv_pci fbfc5ecf-0ffd-4390-81a9-4eae4d852e84: PCI VMBus probing: Using version 0x10004 Feb 13 15:37:54.479049 kernel: hv_pci fbfc5ecf-0ffd-4390-81a9-4eae4d852e84: PCI host bridge to bus 0ffd:00
Feb 13 15:37:54.479631 kernel: pci_bus 0ffd:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 13 15:37:54.479812 kernel: pci_bus 0ffd:00: No busn resource found for root bus, will use [bus 00-ff] Feb 13 15:37:54.479958 kernel: pci 0ffd:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 13 15:37:54.480158 kernel: pci 0ffd:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 15:37:54.480325 kernel: pci 0ffd:00:02.0: enabling Extended Tags Feb 13 15:37:54.480494 kernel: pci 0ffd:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 0ffd:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 13 15:37:54.480681 kernel: pci_bus 0ffd:00: busn_res: [bus 00-ff] end is updated to 00 Feb 13 15:37:54.480832 kernel: pci 0ffd:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 15:37:54.643935 kernel: mlx5_core 0ffd:00:02.0: enabling device (0000 -> 0002) Feb 13 15:37:54.865911 kernel: mlx5_core 0ffd:00:02.0: firmware version: 14.30.5000 Feb 13 15:37:54.866152 kernel: hv_netvsc 6045bde0-bfdf-6045-bde0-bfdf6045bde0 eth0: VF registering: eth1 Feb 13 15:37:54.866333 kernel: mlx5_core 0ffd:00:02.0 eth1: joined to eth0 Feb 13 15:37:54.866521 kernel: mlx5_core 0ffd:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 15:37:54.825010 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Feb 13 15:37:54.874634 kernel: mlx5_core 0ffd:00:02.0 enP4093s1: renamed from eth1 Feb 13 15:37:54.898637 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (455) Feb 13 15:37:54.905452 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Feb 13 15:37:54.923528 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 15:37:54.935734 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (451) Feb 13 15:37:54.950296 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Feb 13 15:37:54.953371 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Feb 13 15:37:54.969809 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:37:54.986677 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:37:54.996636 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:37:56.005491 disk-uuid[602]: The operation has completed successfully. Feb 13 15:37:56.009600 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:37:56.084766 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:37:56.084884 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:37:56.105767 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:37:56.111106 sh[688]: Success Feb 13 15:37:56.142396 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 15:37:56.335647 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:37:56.348737 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:37:56.353242 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 15:37:56.368629 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2 Feb 13 15:37:56.368678 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:37:56.373512 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:37:56.376309 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:37:56.378523 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:37:56.648090 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:37:56.653282 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:37:56.661791 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:37:56.669799 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:37:56.684543 kernel: BTRFS info (device sda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:37:56.684595 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:37:56.684648 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:37:56.702629 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:37:56.716511 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:37:56.718602 kernel: BTRFS info (device sda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:37:56.727240 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:37:56.736803 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:37:56.760583 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:37:56.769896 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 13 15:37:56.788701 systemd-networkd[872]: lo: Link UP Feb 13 15:37:56.788710 systemd-networkd[872]: lo: Gained carrier Feb 13 15:37:56.790826 systemd-networkd[872]: Enumeration completed Feb 13 15:37:56.790907 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:37:56.794046 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:37:56.794050 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:37:56.794904 systemd[1]: Reached target network.target - Network. Feb 13 15:37:56.855638 kernel: mlx5_core 0ffd:00:02.0 enP4093s1: Link up Feb 13 15:37:56.888639 kernel: hv_netvsc 6045bde0-bfdf-6045-bde0-bfdf6045bde0 eth0: Data path switched to VF: enP4093s1 Feb 13 15:37:56.889199 systemd-networkd[872]: enP4093s1: Link UP Feb 13 15:37:56.891566 systemd-networkd[872]: eth0: Link UP Feb 13 15:37:56.891869 systemd-networkd[872]: eth0: Gained carrier Feb 13 15:37:56.891886 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:37:56.894860 systemd-networkd[872]: enP4093s1: Gained carrier Feb 13 15:37:56.914654 systemd-networkd[872]: eth0: DHCPv4 address 10.200.8.20/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 15:37:57.535542 ignition[835]: Ignition 2.20.0 Feb 13 15:37:57.535555 ignition[835]: Stage: fetch-offline Feb 13 15:37:57.537145 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Feb 13 15:37:57.535603 ignition[835]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:37:57.535627 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:37:57.535749 ignition[835]: parsed url from cmdline: "" Feb 13 15:37:57.535754 ignition[835]: no config URL provided Feb 13 15:37:57.535761 ignition[835]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:37:57.535771 ignition[835]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:37:57.535780 ignition[835]: failed to fetch config: resource requires networking Feb 13 15:37:57.536027 ignition[835]: Ignition finished successfully Feb 13 15:37:57.562798 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 15:37:57.577648 ignition[880]: Ignition 2.20.0 Feb 13 15:37:57.577659 ignition[880]: Stage: fetch Feb 13 15:37:57.577871 ignition[880]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:37:57.577884 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:37:57.578009 ignition[880]: parsed url from cmdline: "" Feb 13 15:37:57.578014 ignition[880]: no config URL provided Feb 13 15:37:57.578020 ignition[880]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:37:57.578030 ignition[880]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:37:57.578056 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 13 15:37:57.652215 ignition[880]: GET result: OK Feb 13 15:37:57.652350 ignition[880]: config has been read from IMDS userdata Feb 13 15:37:57.652389 ignition[880]: parsing config with SHA512: 301cf5957c45b04c89ba8f1c490fa91bed39f4e83efb43d7ca6cda35644e157c009f22433f2a6f7d1b5d6b00089dda85506f79d525e200895bcc6e1e5486b340 Feb 13 15:37:57.660482 unknown[880]: fetched base config from "system" Feb 13 15:37:57.660502 unknown[880]: fetched base config from "system" Feb 13 15:37:57.661219 ignition[880]: fetch: fetch complete
Feb 13 15:37:57.660512 unknown[880]: fetched user config from "azure" Feb 13 15:37:57.661225 ignition[880]: fetch: fetch passed Feb 13 15:37:57.663067 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 15:37:57.661278 ignition[880]: Ignition finished successfully Feb 13 15:37:57.676829 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:37:57.691533 ignition[886]: Ignition 2.20.0 Feb 13 15:37:57.691544 ignition[886]: Stage: kargs Feb 13 15:37:57.691782 ignition[886]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:37:57.694848 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:37:57.691796 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:37:57.692718 ignition[886]: kargs: kargs passed Feb 13 15:37:57.692766 ignition[886]: Ignition finished successfully Feb 13 15:37:57.705755 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:37:57.720724 ignition[892]: Ignition 2.20.0 Feb 13 15:37:57.720735 ignition[892]: Stage: disks Feb 13 15:37:57.722729 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:37:57.720951 ignition[892]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:37:57.720965 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:37:57.721843 ignition[892]: disks: disks passed Feb 13 15:37:57.721886 ignition[892]: Ignition finished successfully Feb 13 15:37:57.735016 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:37:57.737723 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:37:57.742642 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:37:57.742739 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:37:57.743086 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:37:57.756877 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:37:57.836689 systemd-fsck[900]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Feb 13 15:37:57.842260 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:37:57.853089 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:37:57.943631 kernel: EXT4-fs (sda9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none. Feb 13 15:37:57.944145 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:37:57.948645 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:37:57.991732 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:37:57.996342 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:37:58.001117 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 15:37:58.009727 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (911) Feb 13 15:37:58.010233 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:37:58.010401 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:37:58.026837 kernel: BTRFS info (device sda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:37:58.026899 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:37:58.026924 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:37:58.029059 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:37:58.036629 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:37:58.037006 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 15:37:58.042371 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:37:58.596120 systemd-networkd[872]: enP4093s1: Gained IPv6LL Feb 13 15:37:58.642535 coreos-metadata[913]: Feb 13 15:37:58.642 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 15:37:58.646781 coreos-metadata[913]: Feb 13 15:37:58.645 INFO Fetch successful Feb 13 15:37:58.646781 coreos-metadata[913]: Feb 13 15:37:58.645 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 13 15:37:58.658723 coreos-metadata[913]: Feb 13 15:37:58.658 INFO Fetch successful Feb 13 15:37:58.677146 coreos-metadata[913]: Feb 13 15:37:58.677 INFO wrote hostname ci-4152.2.1-a-02a9d39241 to /sysroot/etc/hostname Feb 13 15:37:58.679668 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:37:58.693179 initrd-setup-root[943]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:37:58.727208 initrd-setup-root[950]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:37:58.732155 initrd-setup-root[957]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:37:58.751061 initrd-setup-root[964]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:37:58.784841 systemd-networkd[872]: eth0: Gained IPv6LL Feb 13 15:37:59.763801 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:37:59.772708 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:37:59.779790 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:37:59.788673 kernel: BTRFS info (device sda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:37:59.789681 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Feb 13 15:37:59.817470 ignition[1031]: INFO : Ignition 2.20.0 Feb 13 15:37:59.817470 ignition[1031]: INFO : Stage: mount Feb 13 15:37:59.821805 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:37:59.821805 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:37:59.827715 ignition[1031]: INFO : mount: mount passed Feb 13 15:37:59.829455 ignition[1031]: INFO : Ignition finished successfully Feb 13 15:37:59.828644 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:37:59.837077 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:37:59.843721 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:37:59.854796 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:37:59.868765 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1044) Feb 13 15:37:59.868820 kernel: BTRFS info (device sda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:37:59.871503 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:37:59.873694 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:37:59.878851 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:37:59.880268 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:37:59.902592 ignition[1061]: INFO : Ignition 2.20.0 Feb 13 15:37:59.902592 ignition[1061]: INFO : Stage: files Feb 13 15:37:59.906399 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:37:59.906399 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:37:59.906399 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:37:59.937583 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:37:59.937583 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:38:00.016902 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:38:00.020521 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:38:00.020521 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:38:00.020521 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:38:00.020521 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 15:38:00.017439 unknown[1061]: wrote ssh authorized keys file for user: core Feb 13 15:38:00.096069 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:38:00.544369 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:38:00.549530 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:38:00.549530 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 15:38:01.048862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 15:38:01.166887 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:38:01.171770 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:38:01.171770 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:38:01.171770 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:38:01.184264 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:38:01.184264 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:38:01.184264 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:38:01.184264 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:38:01.184264 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:38:01.184264 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:38:01.207996 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:38:01.207996 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:38:01.207996 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:38:01.207996 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:38:01.207996 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Feb 13 15:38:01.543744 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 15:38:01.799138 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:38:01.799138 ignition[1061]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 15:38:01.807792 ignition[1061]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:38:01.812306 ignition[1061]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:38:01.812306 ignition[1061]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 15:38:01.819385 ignition[1061]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:38:01.822533 ignition[1061]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:38:01.825919 ignition[1061]: INFO : files: createResultFile: createFiles: op(f): [started] writing file 
"/sysroot/etc/.ignition-result.json" Feb 13 15:38:01.829666 ignition[1061]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:38:01.833451 ignition[1061]: INFO : files: files passed Feb 13 15:38:01.835147 ignition[1061]: INFO : Ignition finished successfully Feb 13 15:38:01.834650 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:38:01.846859 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:38:01.852567 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:38:01.855471 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:38:01.855562 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:38:01.874419 initrd-setup-root-after-ignition[1089]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:38:01.874419 initrd-setup-root-after-ignition[1089]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:38:01.884325 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:38:01.877942 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:38:01.881179 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:38:01.896837 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:38:01.920966 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:38:01.921091 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:38:01.925990 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:38:01.933018 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Feb 13 15:38:01.933542 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:38:01.944830 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:38:01.959233 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:38:01.967785 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:38:01.978907 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:38:01.981662 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:38:01.989018 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:38:01.991539 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:38:01.993917 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:38:02.000670 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:38:02.003037 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:38:02.008952 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:38:02.011280 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:38:02.018770 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:38:02.021516 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:38:02.027952 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:38:02.030884 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:38:02.037760 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:38:02.040366 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:38:02.044104 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Feb 13 15:38:02.044277 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:38:02.048689 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:38:02.052658 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:38:02.062650 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:38:02.065796 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:38:02.068770 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:38:02.068955 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:38:02.077928 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:38:02.083256 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:38:02.088252 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:38:02.088387 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:38:02.099392 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 15:38:02.099556 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:38:02.111844 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:38:02.114048 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:38:02.114243 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:38:02.128907 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:38:02.130940 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Feb 13 15:38:02.138529 ignition[1113]: INFO : Ignition 2.20.0 Feb 13 15:38:02.138529 ignition[1113]: INFO : Stage: umount Feb 13 15:38:02.138529 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:38:02.138529 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:38:02.138529 ignition[1113]: INFO : umount: umount passed Feb 13 15:38:02.138529 ignition[1113]: INFO : Ignition finished successfully Feb 13 15:38:02.131129 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:38:02.138464 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:38:02.138655 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:38:02.152714 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:38:02.154640 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:38:02.158500 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:38:02.158588 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:38:02.167395 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:38:02.167450 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:38:02.171769 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:38:02.173960 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:38:02.181169 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:38:02.181228 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:38:02.189869 systemd[1]: Stopped target network.target - Network. Feb 13 15:38:02.193831 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:38:02.193902 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:38:02.206416 systemd[1]: Stopped target paths.target - Path Units. 
Feb 13 15:38:02.208339 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:38:02.212600 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:38:02.218530 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:38:02.222420 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:38:02.226809 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:38:02.226870 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:38:02.232549 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:38:02.232605 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:38:02.236820 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:38:02.236884 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:38:02.240897 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:38:02.240959 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:38:02.245408 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:38:02.249495 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:38:02.254936 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:38:02.255453 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:38:02.255536 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:38:02.258852 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:38:02.258950 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:38:02.269742 systemd-networkd[872]: eth0: DHCPv6 lease lost Feb 13 15:38:02.273331 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:38:02.273474 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Feb 13 15:38:02.276381 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:38:02.276490 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:38:02.282219 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:38:02.282275 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:38:02.296306 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:38:02.300569 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:38:02.300639 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:38:02.303475 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:38:02.303523 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:38:02.307900 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:38:02.307957 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:38:02.312753 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:38:02.312799 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:38:02.331035 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:38:02.345286 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:38:02.345455 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:38:02.350297 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:38:02.350339 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:38:02.359514 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:38:02.359566 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 15:38:02.366692 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:38:02.366772 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:38:02.373771 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:38:02.373836 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:38:02.378532 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:38:02.378581 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:38:02.391471 kernel: hv_netvsc 6045bde0-bfdf-6045-bde0-bfdf6045bde0 eth0: Data path switched from VF: enP4093s1 Feb 13 15:38:02.395761 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:38:02.400551 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:38:02.400649 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:38:02.407903 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:38:02.407959 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:38:02.413026 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:38:02.413121 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:38:02.419601 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:38:02.419734 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:38:02.423868 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:38:02.440905 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:38:02.450911 systemd[1]: Switching root. Feb 13 15:38:02.552150 systemd-journald[177]: Journal stopped Feb 13 15:38:08.212994 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). 
Feb 13 15:38:08.213027 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:38:08.213041 kernel: SELinux: policy capability open_perms=1 Feb 13 15:38:08.213051 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:38:08.213061 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:38:08.213071 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:38:08.213081 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:38:08.213094 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:38:08.213103 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:38:08.213115 kernel: audit: type=1403 audit(1739461084.987:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:38:08.213124 systemd[1]: Successfully loaded SELinux policy in 123.956ms. Feb 13 15:38:08.213137 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.570ms. Feb 13 15:38:08.213148 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:38:08.213160 systemd[1]: Detected virtualization microsoft. Feb 13 15:38:08.213175 systemd[1]: Detected architecture x86-64. Feb 13 15:38:08.213186 systemd[1]: Detected first boot. Feb 13 15:38:08.213198 systemd[1]: Hostname set to . Feb 13 15:38:08.213210 systemd[1]: Initializing machine ID from random generator. Feb 13 15:38:08.213220 zram_generator::config[1156]: No configuration found. Feb 13 15:38:08.213235 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:38:08.213244 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:38:08.213257 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Feb 13 15:38:08.213267 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:38:08.213280 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:38:08.213291 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:38:08.213303 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:38:08.213318 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:38:08.213328 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:38:08.213340 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:38:08.213352 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:38:08.213363 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:38:08.213375 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:38:08.213385 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:38:08.213398 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:38:08.213411 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:38:08.213422 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:38:08.213439 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:38:08.213449 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:38:08.213461 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:38:08.213471 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Feb 13 15:38:08.213487 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:38:08.213500 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:38:08.213514 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:38:08.213526 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:38:08.213539 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:38:08.213549 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:38:08.213560 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:38:08.213570 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:38:08.213580 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:38:08.213592 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:38:08.213602 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:38:08.213620 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:38:08.213631 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:38:08.213644 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:38:08.213659 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:38:08.213671 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:38:08.213683 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:38:08.213696 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:38:08.213706 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:38:08.213719 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Feb 13 15:38:08.213730 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:38:08.213743 systemd[1]: Reached target machines.target - Containers. Feb 13 15:38:08.213757 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:38:08.213770 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:38:08.213781 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:38:08.213793 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:38:08.213806 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:38:08.213818 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:38:08.213828 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:38:08.213838 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:38:08.213851 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:38:08.213864 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:38:08.213876 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:38:08.213888 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:38:08.213899 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:38:08.213911 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:38:08.213924 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:38:08.213934 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Feb 13 15:38:08.213947 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:38:08.213963 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:38:08.213973 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:38:08.213986 kernel: ACPI: bus type drm_connector registered Feb 13 15:38:08.213996 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:38:08.214008 kernel: loop: module loaded Feb 13 15:38:08.214019 systemd[1]: Stopped verity-setup.service. Feb 13 15:38:08.214047 systemd-journald[1255]: Collecting audit messages is disabled. Feb 13 15:38:08.214076 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:38:08.214087 systemd-journald[1255]: Journal started Feb 13 15:38:08.214112 systemd-journald[1255]: Runtime Journal (/run/log/journal/2a3ed274d9bf45b99779c201d6890455) is 8.0M, max 158.8M, 150.8M free. Feb 13 15:38:07.453550 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:38:07.640271 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 15:38:07.640692 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:38:08.223652 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:38:08.223700 kernel: fuse: init (API version 7.39) Feb 13 15:38:08.229077 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:38:08.231820 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:38:08.234425 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:38:08.236945 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:38:08.239714 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Feb 13 15:38:08.242260 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:38:08.245193 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:38:08.248429 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:38:08.259182 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:38:08.259513 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:38:08.262799 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:38:08.263056 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:38:08.266309 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:38:08.266602 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:38:08.270015 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:38:08.270295 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:38:08.273593 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:38:08.273894 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:38:08.277059 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:38:08.277340 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:38:08.280448 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:38:08.283718 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:38:08.287849 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:38:08.307169 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:38:08.318010 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:38:08.323771 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:38:08.326741 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:38:08.326790 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:38:08.330575 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:38:08.334827 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:38:08.344804 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:38:08.347577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:38:08.348913 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:38:08.356730 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:38:08.359668 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:38:08.362603 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:38:08.365340 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:38:08.369429 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:38:08.374832 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:38:08.385786 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:38:08.392468 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:38:08.395726 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:38:08.398663 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:38:08.401145 systemd-journald[1255]: Time spent on flushing to /var/log/journal/2a3ed274d9bf45b99779c201d6890455 is 27.314ms for 961 entries.
Feb 13 15:38:08.401145 systemd-journald[1255]: System Journal (/var/log/journal/2a3ed274d9bf45b99779c201d6890455) is 8.0M, max 2.6G, 2.6G free.
Feb 13 15:38:08.546168 systemd-journald[1255]: Received client request to flush runtime journal.
Feb 13 15:38:08.546230 kernel: loop0: detected capacity change from 0 to 140992
Feb 13 15:38:08.403908 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:38:08.420865 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:38:08.434733 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:38:08.439180 udevadm[1300]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:38:08.440799 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:38:08.450785 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:38:08.547574 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:38:08.556586 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:38:08.586831 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:38:08.587516 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:38:08.664462 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:38:08.677787 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:38:08.732023 systemd-tmpfiles[1309]: ACLs are not supported, ignoring.
Feb 13 15:38:08.732050 systemd-tmpfiles[1309]: ACLs are not supported, ignoring.
Feb 13 15:38:08.739509 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:38:08.880644 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:38:08.907704 kernel: loop1: detected capacity change from 0 to 211296
Feb 13 15:38:08.964676 kernel: loop2: detected capacity change from 0 to 138184
Feb 13 15:38:09.399647 kernel: loop3: detected capacity change from 0 to 28272
Feb 13 15:38:09.682375 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:38:09.690846 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:38:09.715254 systemd-udevd[1317]: Using default interface naming scheme 'v255'.
Feb 13 15:38:09.882639 kernel: loop4: detected capacity change from 0 to 140992
Feb 13 15:38:09.887449 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:38:09.900049 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:38:09.906676 kernel: loop5: detected capacity change from 0 to 211296
Feb 13 15:38:09.935637 kernel: loop6: detected capacity change from 0 to 138184
Feb 13 15:38:09.960554 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:38:09.960754 kernel: loop7: detected capacity change from 0 to 28272
Feb 13 15:38:09.977160 (sd-merge)[1319]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Feb 13 15:38:09.977924 (sd-merge)[1319]: Merged extensions into '/usr'.
Feb 13 15:38:09.981804 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:38:10.076641 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:38:10.089082 systemd[1]: Reloading requested from client PID 1292 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:38:10.089101 systemd[1]: Reloading...
Feb 13 15:38:10.143658 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:38:10.152639 kernel: hv_vmbus: registering driver hv_balloon
Feb 13 15:38:10.160673 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 13 15:38:10.216636 zram_generator::config[1386]: No configuration found.
Feb 13 15:38:10.242744 kernel: hv_vmbus: registering driver hyperv_fb
Feb 13 15:38:10.248005 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 13 15:38:10.248092 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 13 15:38:10.258479 kernel: Console: switching to colour dummy device 80x25
Feb 13 15:38:10.266670 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 15:38:10.277844 systemd-networkd[1324]: lo: Link UP
Feb 13 15:38:10.280245 systemd-networkd[1324]: lo: Gained carrier
Feb 13 15:38:10.292804 systemd-networkd[1324]: Enumeration completed
Feb 13 15:38:10.293699 systemd-networkd[1324]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:38:10.295519 systemd-networkd[1324]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:38:10.409170 kernel: mlx5_core 0ffd:00:02.0 enP4093s1: Link up
Feb 13 15:38:10.432656 kernel: hv_netvsc 6045bde0-bfdf-6045-bde0-bfdf6045bde0 eth0: Data path switched to VF: enP4093s1
Feb 13 15:38:10.444304 systemd-networkd[1324]: enP4093s1: Link UP
Feb 13 15:38:10.444448 systemd-networkd[1324]: eth0: Link UP
Feb 13 15:38:10.444453 systemd-networkd[1324]: eth0: Gained carrier
Feb 13 15:38:10.444478 systemd-networkd[1324]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:38:10.510763 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1328)
Feb 13 15:38:10.509033 systemd-networkd[1324]: enP4093s1: Gained carrier
Feb 13 15:38:10.537743 systemd-networkd[1324]: eth0: DHCPv4 address 10.200.8.20/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 13 15:38:10.692656 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Feb 13 15:38:10.711208 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:38:10.810927 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 15:38:10.815654 systemd[1]: Reloading finished in 725 ms.
Feb 13 15:38:10.842303 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:38:10.845659 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:38:10.884883 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:38:10.891712 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:38:10.896692 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:38:10.906099 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:38:10.911887 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:38:10.928745 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:38:10.933873 systemd[1]: Reloading requested from client PID 1509 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:38:10.934395 systemd[1]: Reloading...
Feb 13 15:38:10.936055 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:38:10.936568 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:38:10.937853 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:38:10.938272 systemd-tmpfiles[1512]: ACLs are not supported, ignoring.
Feb 13 15:38:10.938361 systemd-tmpfiles[1512]: ACLs are not supported, ignoring.
Feb 13 15:38:10.961249 systemd-tmpfiles[1512]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:38:10.961261 systemd-tmpfiles[1512]: Skipping /boot
Feb 13 15:38:10.975295 systemd-tmpfiles[1512]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:38:10.975432 systemd-tmpfiles[1512]: Skipping /boot
Feb 13 15:38:11.028635 zram_generator::config[1548]: No configuration found.
Feb 13 15:38:11.155561 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:38:11.241324 systemd[1]: Reloading finished in 306 ms.
Feb 13 15:38:11.262878 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:38:11.271251 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:38:11.286766 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:38:11.291737 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:38:11.298832 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:38:11.306387 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:38:11.313922 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:38:11.319913 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:38:11.324476 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:38:11.339581 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:38:11.339880 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:38:11.344930 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:38:11.358923 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:38:11.372911 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:38:11.378874 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:38:11.379060 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:38:11.380560 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:38:11.380794 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:38:11.384106 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:38:11.384283 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:38:11.388680 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:38:11.389254 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:38:11.402730 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:38:11.403221 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:38:11.411159 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:38:11.423956 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:38:11.432691 lvm[1613]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:38:11.434006 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:38:11.436897 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:38:11.437171 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:38:11.442712 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:38:11.442922 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:38:11.449162 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:38:11.449942 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:38:11.462922 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:38:11.468116 augenrules[1647]: No rules
Feb 13 15:38:11.471272 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:38:11.471495 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:38:11.479556 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:38:11.479953 systemd-resolved[1621]: Positive Trust Anchors:
Feb 13 15:38:11.480200 systemd-resolved[1621]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:38:11.480281 systemd-resolved[1621]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:38:11.482878 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:38:11.483037 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:38:11.491099 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:38:11.493937 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:38:11.494201 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:38:11.498813 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:38:11.501019 systemd-resolved[1621]: Using system hostname 'ci-4152.2.1-a-02a9d39241'.
Feb 13 15:38:11.504845 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:38:11.510892 lvm[1658]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:38:11.511786 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:38:11.515808 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:38:11.520816 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:38:11.520915 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:38:11.523500 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:38:11.523829 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:38:11.526935 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:38:11.532358 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:38:11.542285 systemd[1]: Reached target network.target - Network.
Feb 13 15:38:11.545262 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:38:11.549278 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:38:11.555820 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:38:11.556010 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:38:11.559055 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:38:11.559231 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:38:11.562093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:38:11.562254 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:38:11.566299 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:38:11.566388 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:38:11.712961 systemd-networkd[1324]: enP4093s1: Gained IPv6LL
Feb 13 15:38:11.741051 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:38:11.744436 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:38:12.417760 systemd-networkd[1324]: eth0: Gained IPv6LL
Feb 13 15:38:12.421258 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:38:12.425147 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:38:14.863405 ldconfig[1287]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:38:14.875062 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:38:14.883829 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:38:14.904995 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:38:14.908525 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:38:14.911575 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:38:14.918331 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:38:14.921232 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:38:14.923749 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:38:14.926561 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:38:14.929463 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:38:14.929506 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:38:14.931506 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:38:14.934334 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:38:14.938375 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:38:14.964218 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:38:14.967568 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:38:14.970046 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:38:14.972101 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:38:14.974213 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:38:14.974246 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:38:15.007756 systemd[1]: Starting chronyd.service - NTP client/server...
Feb 13 15:38:15.013744 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:38:15.021790 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 15:38:15.028771 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:38:15.033401 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:38:15.043322 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:38:15.045862 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:38:15.045916 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Feb 13 15:38:15.049098 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Feb 13 15:38:15.051645 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Feb 13 15:38:15.053874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:38:15.064331 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:38:15.068210 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:38:15.072776 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 15:38:15.073097 KVP[1681]: KVP starting; pid is:1681
Feb 13 15:38:15.086321 jq[1679]: false
Feb 13 15:38:15.086845 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:38:15.088339 (chronyd)[1675]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Feb 13 15:38:15.092161 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:38:15.095078 KVP[1681]: KVP LIC Version: 3.1
Feb 13 15:38:15.095647 kernel: hv_utils: KVP IC version 4.0
Feb 13 15:38:15.106820 chronyd[1694]: chronyd version 4.6 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Feb 13 15:38:15.110952 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:38:15.114103 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:38:15.114796 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:38:15.116137 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:38:15.129815 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:38:15.143114 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:38:15.143336 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:38:15.148222 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:38:15.148445 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:38:15.154121 chronyd[1694]: Timezone right/UTC failed leap second check, ignoring
Feb 13 15:38:15.154406 chronyd[1694]: Loaded seccomp filter (level 2)
Feb 13 15:38:15.156040 systemd[1]: Started chronyd.service - NTP client/server.
Feb 13 15:38:15.186587 jq[1696]: true
Feb 13 15:38:15.206635 extend-filesystems[1680]: Found loop4
Feb 13 15:38:15.206635 extend-filesystems[1680]: Found loop5
Feb 13 15:38:15.210483 extend-filesystems[1680]: Found loop6
Feb 13 15:38:15.212408 extend-filesystems[1680]: Found loop7
Feb 13 15:38:15.214190 extend-filesystems[1680]: Found sda
Feb 13 15:38:15.215924 extend-filesystems[1680]: Found sda1
Feb 13 15:38:15.217547 extend-filesystems[1680]: Found sda2
Feb 13 15:38:15.222817 extend-filesystems[1680]: Found sda3
Feb 13 15:38:15.222817 extend-filesystems[1680]: Found usr
Feb 13 15:38:15.222817 extend-filesystems[1680]: Found sda4
Feb 13 15:38:15.222817 extend-filesystems[1680]: Found sda6
Feb 13 15:38:15.222817 extend-filesystems[1680]: Found sda7
Feb 13 15:38:15.222817 extend-filesystems[1680]: Found sda9
Feb 13 15:38:15.222817 extend-filesystems[1680]: Checking size of /dev/sda9
Feb 13 15:38:15.221457 (ntainerd)[1720]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:38:15.276141 dbus-daemon[1678]: [system] SELinux support is enabled
Feb 13 15:38:15.284078 tar[1703]: linux-amd64/helm
Feb 13 15:38:15.239326 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:38:15.239560 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:38:15.291449 jq[1716]: true
Feb 13 15:38:15.265525 systemd-logind[1692]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 15:38:15.305063 extend-filesystems[1680]: Old size kept for /dev/sda9
Feb 13 15:38:15.305063 extend-filesystems[1680]: Found sr0
Feb 13 15:38:15.269810 systemd-logind[1692]: New seat seat0.
Feb 13 15:38:15.274756 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:38:15.284501 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:38:15.292288 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:38:15.295954 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:38:15.296035 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:38:15.301318 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:38:15.301342 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:38:15.306985 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:38:15.307221 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:38:15.318792 dbus-daemon[1678]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 15:38:15.326232 update_engine[1695]: I20250213 15:38:15.326117 1695 main.cc:92] Flatcar Update Engine starting
Feb 13 15:38:15.334659 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:38:15.337757 update_engine[1695]: I20250213 15:38:15.335417 1695 update_check_scheduler.cc:74] Next update check in 7m10s
Feb 13 15:38:15.348922 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:38:15.433116 bash[1749]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:38:15.437232 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:38:15.443467 coreos-metadata[1677]: Feb 13 15:38:15.441 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 13 15:38:15.443358 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 15:38:15.449123 coreos-metadata[1677]: Feb 13 15:38:15.447 INFO Fetch successful
Feb 13 15:38:15.449123 coreos-metadata[1677]: Feb 13 15:38:15.447 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Feb 13 15:38:15.453222 coreos-metadata[1677]: Feb 13 15:38:15.452 INFO Fetch successful
Feb 13 15:38:15.453222 coreos-metadata[1677]: Feb 13 15:38:15.452 INFO Fetching http://168.63.129.16/machine/e70142df-bfa1-4680-9583-b8072836f537/8ec33695%2D817d%2D43c2%2Db3df%2Da18d13fc3d7a.%5Fci%2D4152.2.1%2Da%2D02a9d39241?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Feb 13 15:38:15.454870 coreos-metadata[1677]: Feb 13 15:38:15.454 INFO Fetch successful
Feb 13 15:38:15.456823 coreos-metadata[1677]: Feb 13 15:38:15.456 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Feb 13 15:38:15.471992 coreos-metadata[1677]: Feb 13 15:38:15.471 INFO Fetch successful
Feb 13 15:38:15.559795 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 15:38:15.565408 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 15:38:15.608656 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1759) Feb 13 15:38:15.708204 locksmithd[1750]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:38:16.110223 sshd_keygen[1719]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:38:16.186117 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:38:16.204180 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:38:16.213977 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Feb 13 15:38:16.238551 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:38:16.239152 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:38:16.262048 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:38:16.273791 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Feb 13 15:38:16.307997 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:38:16.322084 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:38:16.332432 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:38:16.338932 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:38:16.394168 tar[1703]: linux-amd64/LICENSE Feb 13 15:38:16.395303 tar[1703]: linux-amd64/README.md Feb 13 15:38:16.409149 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:38:16.490643 containerd[1720]: time="2025-02-13T15:38:16.489892000Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:38:16.528374 containerd[1720]: time="2025-02-13T15:38:16.527475700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:38:16.530124 containerd[1720]: time="2025-02-13T15:38:16.529376600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:38:16.530124 containerd[1720]: time="2025-02-13T15:38:16.529414800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:38:16.530124 containerd[1720]: time="2025-02-13T15:38:16.529438600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:38:16.530124 containerd[1720]: time="2025-02-13T15:38:16.529605400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:38:16.530124 containerd[1720]: time="2025-02-13T15:38:16.529642100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:38:16.530124 containerd[1720]: time="2025-02-13T15:38:16.529714600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:38:16.530124 containerd[1720]: time="2025-02-13T15:38:16.529730700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:38:16.530124 containerd[1720]: time="2025-02-13T15:38:16.529934300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:38:16.530124 containerd[1720]: time="2025-02-13T15:38:16.529952700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:38:16.530124 containerd[1720]: time="2025-02-13T15:38:16.529972500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:38:16.530124 containerd[1720]: time="2025-02-13T15:38:16.529985300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:38:16.530551 containerd[1720]: time="2025-02-13T15:38:16.530076800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:38:16.530551 containerd[1720]: time="2025-02-13T15:38:16.530320500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:38:16.530551 containerd[1720]: time="2025-02-13T15:38:16.530475700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:38:16.530551 containerd[1720]: time="2025-02-13T15:38:16.530496300Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:38:16.530733 containerd[1720]: time="2025-02-13T15:38:16.530597400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 15:38:16.530733 containerd[1720]: time="2025-02-13T15:38:16.530684900Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:38:16.543879 containerd[1720]: time="2025-02-13T15:38:16.543671400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:38:16.543879 containerd[1720]: time="2025-02-13T15:38:16.543731700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:38:16.543879 containerd[1720]: time="2025-02-13T15:38:16.543754900Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:38:16.543879 containerd[1720]: time="2025-02-13T15:38:16.543777200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:38:16.543879 containerd[1720]: time="2025-02-13T15:38:16.543797100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:38:16.544104 containerd[1720]: time="2025-02-13T15:38:16.543961200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:38:16.545671 containerd[1720]: time="2025-02-13T15:38:16.544266900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:38:16.545671 containerd[1720]: time="2025-02-13T15:38:16.544397000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:38:16.545671 containerd[1720]: time="2025-02-13T15:38:16.544417100Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:38:16.545671 containerd[1720]: time="2025-02-13T15:38:16.544437400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Feb 13 15:38:16.545671 containerd[1720]: time="2025-02-13T15:38:16.544456300Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:38:16.545671 containerd[1720]: time="2025-02-13T15:38:16.544474500Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:38:16.545671 containerd[1720]: time="2025-02-13T15:38:16.544492600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:38:16.545671 containerd[1720]: time="2025-02-13T15:38:16.544512000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:38:16.545671 containerd[1720]: time="2025-02-13T15:38:16.544531200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:38:16.545671 containerd[1720]: time="2025-02-13T15:38:16.544548000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:38:16.545671 containerd[1720]: time="2025-02-13T15:38:16.544566000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:38:16.545671 containerd[1720]: time="2025-02-13T15:38:16.544583300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:38:16.545671 containerd[1720]: time="2025-02-13T15:38:16.544623600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:38:16.545671 containerd[1720]: time="2025-02-13T15:38:16.544643400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Feb 13 15:38:16.546408 containerd[1720]: time="2025-02-13T15:38:16.544660300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:38:16.546408 containerd[1720]: time="2025-02-13T15:38:16.544691800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:38:16.546408 containerd[1720]: time="2025-02-13T15:38:16.544710100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:38:16.546408 containerd[1720]: time="2025-02-13T15:38:16.544729900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:38:16.546408 containerd[1720]: time="2025-02-13T15:38:16.544748000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:38:16.546408 containerd[1720]: time="2025-02-13T15:38:16.544766100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:38:16.546408 containerd[1720]: time="2025-02-13T15:38:16.544783700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:38:16.546408 containerd[1720]: time="2025-02-13T15:38:16.544804800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:38:16.546408 containerd[1720]: time="2025-02-13T15:38:16.544821400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:38:16.546408 containerd[1720]: time="2025-02-13T15:38:16.544838300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:38:16.546408 containerd[1720]: time="2025-02-13T15:38:16.544855100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 13 15:38:16.546408 containerd[1720]: time="2025-02-13T15:38:16.544876600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:38:16.546408 containerd[1720]: time="2025-02-13T15:38:16.544914100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:38:16.546408 containerd[1720]: time="2025-02-13T15:38:16.544935600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:38:16.546408 containerd[1720]: time="2025-02-13T15:38:16.544951300Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:38:16.546968 containerd[1720]: time="2025-02-13T15:38:16.545004000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:38:16.546968 containerd[1720]: time="2025-02-13T15:38:16.545024800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:38:16.546968 containerd[1720]: time="2025-02-13T15:38:16.545040000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:38:16.546968 containerd[1720]: time="2025-02-13T15:38:16.545057900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:38:16.546968 containerd[1720]: time="2025-02-13T15:38:16.545070900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:38:16.546968 containerd[1720]: time="2025-02-13T15:38:16.545088400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Feb 13 15:38:16.546968 containerd[1720]: time="2025-02-13T15:38:16.545101000Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:38:16.546968 containerd[1720]: time="2025-02-13T15:38:16.545115100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 15:38:16.547240 containerd[1720]: time="2025-02-13T15:38:16.545502900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:38:16.547240 containerd[1720]: time="2025-02-13T15:38:16.545572000Z" level=info msg="Connect containerd service" Feb 13 15:38:16.547240 containerd[1720]: time="2025-02-13T15:38:16.545831300Z" level=info msg="using legacy CRI server" Feb 13 15:38:16.547240 containerd[1720]: time="2025-02-13T15:38:16.545853500Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:38:16.547240 containerd[1720]: time="2025-02-13T15:38:16.546065100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:38:16.551628 containerd[1720]: time="2025-02-13T15:38:16.549547100Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:38:16.551628 containerd[1720]: time="2025-02-13T15:38:16.549910400Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 13 15:38:16.551628 containerd[1720]: time="2025-02-13T15:38:16.549962700Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:38:16.551628 containerd[1720]: time="2025-02-13T15:38:16.549997000Z" level=info msg="Start subscribing containerd event" Feb 13 15:38:16.551628 containerd[1720]: time="2025-02-13T15:38:16.550037600Z" level=info msg="Start recovering state" Feb 13 15:38:16.551628 containerd[1720]: time="2025-02-13T15:38:16.550105400Z" level=info msg="Start event monitor" Feb 13 15:38:16.551628 containerd[1720]: time="2025-02-13T15:38:16.550122000Z" level=info msg="Start snapshots syncer" Feb 13 15:38:16.551628 containerd[1720]: time="2025-02-13T15:38:16.550133100Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:38:16.551628 containerd[1720]: time="2025-02-13T15:38:16.550143500Z" level=info msg="Start streaming server" Feb 13 15:38:16.551628 containerd[1720]: time="2025-02-13T15:38:16.550201000Z" level=info msg="containerd successfully booted in 0.061369s" Feb 13 15:38:16.550906 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:38:16.781837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:38:16.785193 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:38:16.787715 systemd[1]: Startup finished in 850ms (firmware) + 30.360s (loader) + 930ms (kernel) + 12.067s (initrd) + 11.922s (userspace) = 56.131s. Feb 13 15:38:16.794557 (kubelet)[1866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:38:17.051389 login[1852]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Feb 13 15:38:17.054068 login[1853]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 15:38:17.064938 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Feb 13 15:38:17.071061 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:38:17.074795 systemd-logind[1692]: New session 2 of user core. Feb 13 15:38:17.106256 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:38:17.113766 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:38:17.125198 (systemd)[1877]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:38:17.318149 systemd[1877]: Queued start job for default target default.target. Feb 13 15:38:17.324221 systemd[1877]: Created slice app.slice - User Application Slice. Feb 13 15:38:17.324260 systemd[1877]: Reached target paths.target - Paths. Feb 13 15:38:17.324278 systemd[1877]: Reached target timers.target - Timers. Feb 13 15:38:17.325911 systemd[1877]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:38:17.344479 systemd[1877]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:38:17.344556 systemd[1877]: Reached target sockets.target - Sockets. Feb 13 15:38:17.344575 systemd[1877]: Reached target basic.target - Basic System. Feb 13 15:38:17.344676 systemd[1877]: Reached target default.target - Main User Target. Feb 13 15:38:17.344722 systemd[1877]: Startup finished in 211ms. Feb 13 15:38:17.345460 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:38:17.352797 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 15:38:17.558443 kubelet[1866]: E0213 15:38:17.558353 1866 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:38:17.561196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:38:17.561389 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:38:18.052271 login[1852]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 15:38:18.057333 systemd-logind[1692]: New session 1 of user core. Feb 13 15:38:18.061833 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:38:18.112452 waagent[1850]: 2025-02-13T15:38:18.109286Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Feb 13 15:38:18.112452 waagent[1850]: 2025-02-13T15:38:18.112163Z INFO Daemon Daemon OS: flatcar 4152.2.1 Feb 13 15:38:18.114385 waagent[1850]: 2025-02-13T15:38:18.114324Z INFO Daemon Daemon Python: 3.11.10 Feb 13 15:38:18.116447 waagent[1850]: 2025-02-13T15:38:18.116367Z INFO Daemon Daemon Run daemon Feb 13 15:38:18.118538 waagent[1850]: 2025-02-13T15:38:18.118487Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4152.2.1' Feb 13 15:38:18.123154 waagent[1850]: 2025-02-13T15:38:18.123044Z INFO Daemon Daemon Using waagent for provisioning Feb 13 15:38:18.130073 waagent[1850]: 2025-02-13T15:38:18.123377Z INFO Daemon Daemon Activate resource disk Feb 13 15:38:18.130073 waagent[1850]: 2025-02-13T15:38:18.124374Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 13 15:38:18.135010 waagent[1850]: 2025-02-13T15:38:18.131201Z INFO Daemon Daemon Found device: None Feb 13 15:38:18.135010 waagent[1850]: 
2025-02-13T15:38:18.131486Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 13 15:38:18.135010 waagent[1850]: 2025-02-13T15:38:18.132579Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 13 15:38:18.135010 waagent[1850]: 2025-02-13T15:38:18.133398Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 15:38:18.135010 waagent[1850]: 2025-02-13T15:38:18.134124Z INFO Daemon Daemon Running default provisioning handler Feb 13 15:38:18.143430 waagent[1850]: 2025-02-13T15:38:18.142713Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Feb 13 15:38:18.145709 waagent[1850]: 2025-02-13T15:38:18.143806Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 13 15:38:18.145709 waagent[1850]: 2025-02-13T15:38:18.144326Z INFO Daemon Daemon cloud-init is enabled: False Feb 13 15:38:18.145709 waagent[1850]: 2025-02-13T15:38:18.145005Z INFO Daemon Daemon Copying ovf-env.xml Feb 13 15:38:18.223831 waagent[1850]: 2025-02-13T15:38:18.223701Z INFO Daemon Daemon Successfully mounted dvd Feb 13 15:38:18.238489 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 13 15:38:18.240555 waagent[1850]: 2025-02-13T15:38:18.240482Z INFO Daemon Daemon Detect protocol endpoint Feb 13 15:38:18.253904 waagent[1850]: 2025-02-13T15:38:18.240913Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 15:38:18.253904 waagent[1850]: 2025-02-13T15:38:18.242371Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 13 15:38:18.253904 waagent[1850]: 2025-02-13T15:38:18.243233Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 13 15:38:18.253904 waagent[1850]: 2025-02-13T15:38:18.244107Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 13 15:38:18.253904 waagent[1850]: 2025-02-13T15:38:18.244813Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 13 15:38:18.293907 waagent[1850]: 2025-02-13T15:38:18.293839Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 13 15:38:18.301514 waagent[1850]: 2025-02-13T15:38:18.294398Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 13 15:38:18.301514 waagent[1850]: 2025-02-13T15:38:18.295231Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 13 15:38:18.398875 waagent[1850]: 2025-02-13T15:38:18.398764Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 13 15:38:18.402181 waagent[1850]: 2025-02-13T15:38:18.402109Z INFO Daemon Daemon Forcing an update of the goal state. Feb 13 15:38:18.408115 waagent[1850]: 2025-02-13T15:38:18.408061Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 15:38:18.516754 waagent[1850]: 2025-02-13T15:38:18.516664Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Feb 13 15:38:18.533441 waagent[1850]: 2025-02-13T15:38:18.517722Z INFO Daemon Feb 13 15:38:18.533441 waagent[1850]: 2025-02-13T15:38:18.518452Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 4cb069b8-c494-435e-86a7-aec81b1a9e69 eTag: 11733615552753243328 source: Fabric] Feb 13 15:38:18.533441 waagent[1850]: 2025-02-13T15:38:18.519730Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Feb 13 15:38:18.533441 waagent[1850]: 2025-02-13T15:38:18.520955Z INFO Daemon Feb 13 15:38:18.533441 waagent[1850]: 2025-02-13T15:38:18.521524Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Feb 13 15:38:18.533441 waagent[1850]: 2025-02-13T15:38:18.526675Z INFO Daemon Daemon Downloading artifacts profile blob Feb 13 15:38:18.604691 waagent[1850]: 2025-02-13T15:38:18.604596Z INFO Daemon Downloaded certificate {'thumbprint': '89DD1EB890E6800AA26811E915EE8AA583FB6997', 'hasPrivateKey': True} Feb 13 15:38:18.614630 waagent[1850]: 2025-02-13T15:38:18.605284Z INFO Daemon Downloaded certificate {'thumbprint': '0EBD5B98E0C0F5CC693BC50DC98F4D2CB4FB5CF8', 'hasPrivateKey': False} Feb 13 15:38:18.614630 waagent[1850]: 2025-02-13T15:38:18.606470Z INFO Daemon Fetch goal state completed Feb 13 15:38:18.620508 waagent[1850]: 2025-02-13T15:38:18.620462Z INFO Daemon Daemon Starting provisioning Feb 13 15:38:18.626501 waagent[1850]: 2025-02-13T15:38:18.620666Z INFO Daemon Daemon Handle ovf-env.xml. Feb 13 15:38:18.626501 waagent[1850]: 2025-02-13T15:38:18.621547Z INFO Daemon Daemon Set hostname [ci-4152.2.1-a-02a9d39241] Feb 13 15:38:18.641811 waagent[1850]: 2025-02-13T15:38:18.639565Z INFO Daemon Daemon Publish hostname [ci-4152.2.1-a-02a9d39241] Feb 13 15:38:18.641811 waagent[1850]: 2025-02-13T15:38:18.640029Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 13 15:38:18.641811 waagent[1850]: 2025-02-13T15:38:18.641147Z INFO Daemon Daemon Primary interface is [eth0] Feb 13 15:38:18.669727 systemd-networkd[1324]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:38:18.669738 systemd-networkd[1324]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 15:38:18.669790 systemd-networkd[1324]: eth0: DHCP lease lost Feb 13 15:38:18.671134 waagent[1850]: 2025-02-13T15:38:18.671061Z INFO Daemon Daemon Create user account if not exists Feb 13 15:38:18.686332 waagent[1850]: 2025-02-13T15:38:18.671396Z INFO Daemon Daemon User core already exists, skip useradd Feb 13 15:38:18.686332 waagent[1850]: 2025-02-13T15:38:18.672245Z INFO Daemon Daemon Configure sudoer Feb 13 15:38:18.686332 waagent[1850]: 2025-02-13T15:38:18.673449Z INFO Daemon Daemon Configure sshd Feb 13 15:38:18.686332 waagent[1850]: 2025-02-13T15:38:18.674244Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Feb 13 15:38:18.686332 waagent[1850]: 2025-02-13T15:38:18.674908Z INFO Daemon Daemon Deploy ssh public key. Feb 13 15:38:18.686718 systemd-networkd[1324]: eth0: DHCPv6 lease lost Feb 13 15:38:18.722697 systemd-networkd[1324]: eth0: DHCPv4 address 10.200.8.20/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 15:38:19.809996 waagent[1850]: 2025-02-13T15:38:19.809914Z INFO Daemon Daemon Provisioning complete Feb 13 15:38:19.823037 waagent[1850]: 2025-02-13T15:38:19.822976Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 13 15:38:19.829071 waagent[1850]: 2025-02-13T15:38:19.823356Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Feb 13 15:38:19.829071 waagent[1850]: 2025-02-13T15:38:19.824228Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Feb 13 15:38:19.950116 waagent[1934]: 2025-02-13T15:38:19.950013Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 13 15:38:19.950566 waagent[1934]: 2025-02-13T15:38:19.950184Z INFO ExtHandler ExtHandler OS: flatcar 4152.2.1 Feb 13 15:38:19.950566 waagent[1934]: 2025-02-13T15:38:19.950267Z INFO ExtHandler ExtHandler Python: 3.11.10 Feb 13 15:38:19.987339 waagent[1934]: 2025-02-13T15:38:19.987238Z INFO ExtHandler ExtHandler Distro: flatcar-4152.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 13 15:38:19.987599 waagent[1934]: 2025-02-13T15:38:19.987539Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:38:19.987746 waagent[1934]: 2025-02-13T15:38:19.987690Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:38:19.996275 waagent[1934]: 2025-02-13T15:38:19.996200Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 15:38:20.001131 waagent[1934]: 2025-02-13T15:38:20.001080Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Feb 13 15:38:20.001566 waagent[1934]: 2025-02-13T15:38:20.001517Z INFO ExtHandler Feb 13 15:38:20.001670 waagent[1934]: 2025-02-13T15:38:20.001606Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 3beb0bbb-9a0a-4fb0-9dc0-e6eeb80fd399 eTag: 11733615552753243328 source: Fabric] Feb 13 15:38:20.002004 waagent[1934]: 2025-02-13T15:38:20.001953Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 13 15:38:20.002548 waagent[1934]: 2025-02-13T15:38:20.002491Z INFO ExtHandler Feb 13 15:38:20.002638 waagent[1934]: 2025-02-13T15:38:20.002573Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 13 15:38:20.005749 waagent[1934]: 2025-02-13T15:38:20.005709Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 13 15:38:20.069833 waagent[1934]: 2025-02-13T15:38:20.069678Z INFO ExtHandler Downloaded certificate {'thumbprint': '89DD1EB890E6800AA26811E915EE8AA583FB6997', 'hasPrivateKey': True} Feb 13 15:38:20.070263 waagent[1934]: 2025-02-13T15:38:20.070203Z INFO ExtHandler Downloaded certificate {'thumbprint': '0EBD5B98E0C0F5CC693BC50DC98F4D2CB4FB5CF8', 'hasPrivateKey': False} Feb 13 15:38:20.070756 waagent[1934]: 2025-02-13T15:38:20.070705Z INFO ExtHandler Fetch goal state completed Feb 13 15:38:20.085594 waagent[1934]: 2025-02-13T15:38:20.085519Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1934 Feb 13 15:38:20.085779 waagent[1934]: 2025-02-13T15:38:20.085726Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Feb 13 15:38:20.087399 waagent[1934]: 2025-02-13T15:38:20.087340Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4152.2.1', '', 'Flatcar Container Linux by Kinvolk'] Feb 13 15:38:20.087805 waagent[1934]: 2025-02-13T15:38:20.087757Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 13 15:38:20.142720 waagent[1934]: 2025-02-13T15:38:20.142659Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 13 15:38:20.143004 waagent[1934]: 2025-02-13T15:38:20.142941Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 13 15:38:20.151235 waagent[1934]: 2025-02-13T15:38:20.151164Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Feb 13 15:38:20.158261 systemd[1]: Reloading requested from client PID 1949 ('systemctl') (unit waagent.service)... Feb 13 15:38:20.158278 systemd[1]: Reloading... Feb 13 15:38:20.252641 zram_generator::config[1984]: No configuration found. Feb 13 15:38:20.377626 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:38:20.460090 systemd[1]: Reloading finished in 301 ms. Feb 13 15:38:20.487631 waagent[1934]: 2025-02-13T15:38:20.484156Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Feb 13 15:38:20.494202 systemd[1]: Reloading requested from client PID 2040 ('systemctl') (unit waagent.service)... Feb 13 15:38:20.494217 systemd[1]: Reloading... Feb 13 15:38:20.567659 zram_generator::config[2071]: No configuration found. Feb 13 15:38:20.691393 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:38:20.773871 systemd[1]: Reloading finished in 279 ms. Feb 13 15:38:20.801040 waagent[1934]: 2025-02-13T15:38:20.800918Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Feb 13 15:38:20.801396 waagent[1934]: 2025-02-13T15:38:20.801149Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Feb 13 15:38:21.842658 waagent[1934]: 2025-02-13T15:38:21.842530Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 13 15:38:21.843588 waagent[1934]: 2025-02-13T15:38:21.843507Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 13 15:38:21.845910 waagent[1934]: 2025-02-13T15:38:21.845834Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 13 15:38:21.846561 waagent[1934]: 2025-02-13T15:38:21.846478Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 13 15:38:21.846792 waagent[1934]: 2025-02-13T15:38:21.846739Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:38:21.846888 waagent[1934]: 2025-02-13T15:38:21.846826Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:38:21.847008 waagent[1934]: 2025-02-13T15:38:21.846952Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:38:21.847664 waagent[1934]: 2025-02-13T15:38:21.847558Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 13 15:38:21.847812 waagent[1934]: 2025-02-13T15:38:21.847752Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 13 15:38:21.847868 waagent[1934]: 2025-02-13T15:38:21.847812Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Feb 13 15:38:21.848073 waagent[1934]: 2025-02-13T15:38:21.848036Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:38:21.848716 waagent[1934]: 2025-02-13T15:38:21.848667Z INFO EnvHandler ExtHandler Configure routes Feb 13 15:38:21.848829 waagent[1934]: 2025-02-13T15:38:21.848764Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 13 15:38:21.848829 waagent[1934]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 13 15:38:21.848829 waagent[1934]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 13 15:38:21.848829 waagent[1934]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 13 15:38:21.848829 waagent[1934]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:38:21.848829 waagent[1934]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:38:21.848829 waagent[1934]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:38:21.849084 waagent[1934]: 2025-02-13T15:38:21.848952Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 13 15:38:21.849123 waagent[1934]: 2025-02-13T15:38:21.849091Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 13 15:38:21.849730 waagent[1934]: 2025-02-13T15:38:21.849690Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 13 15:38:21.850220 waagent[1934]: 2025-02-13T15:38:21.850179Z INFO EnvHandler ExtHandler Gateway:None Feb 13 15:38:21.850451 waagent[1934]: 2025-02-13T15:38:21.850402Z INFO EnvHandler ExtHandler Routes:None Feb 13 15:38:21.857324 waagent[1934]: 2025-02-13T15:38:21.857272Z INFO ExtHandler ExtHandler Feb 13 15:38:21.857424 waagent[1934]: 2025-02-13T15:38:21.857374Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 16748400-417f-4fe7-a17b-5fd13322bf70 correlation 7d398bed-191d-4701-a568-f2bc68849451 created: 2025-02-13T15:37:09.976181Z] Feb 13 15:38:21.858372 waagent[1934]: 2025-02-13T15:38:21.858330Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 13 15:38:21.858942 waagent[1934]: 2025-02-13T15:38:21.858898Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Feb 13 15:38:21.915780 waagent[1934]: 2025-02-13T15:38:21.915694Z INFO MonitorHandler ExtHandler Network interfaces: Feb 13 15:38:21.915780 waagent[1934]: Executing ['ip', '-a', '-o', 'link']: Feb 13 15:38:21.915780 waagent[1934]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 13 15:38:21.915780 waagent[1934]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:e0:bf:df brd ff:ff:ff:ff:ff:ff Feb 13 15:38:21.915780 waagent[1934]: 3: enP4093s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:e0:bf:df brd ff:ff:ff:ff:ff:ff\ altname enP4093p0s2 Feb 13 15:38:21.915780 waagent[1934]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 13 15:38:21.915780 waagent[1934]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 13 15:38:21.915780 waagent[1934]: 2: eth0 inet 10.200.8.20/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft 
forever Feb 13 15:38:21.915780 waagent[1934]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 13 15:38:21.915780 waagent[1934]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Feb 13 15:38:21.915780 waagent[1934]: 2: eth0 inet6 fe80::6245:bdff:fee0:bfdf/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 15:38:21.915780 waagent[1934]: 3: enP4093s1 inet6 fe80::6245:bdff:fee0:bfdf/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 15:38:21.967219 waagent[1934]: 2025-02-13T15:38:21.967138Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Feb 13 15:38:21.967219 waagent[1934]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:38:21.967219 waagent[1934]: pkts bytes target prot opt in out source destination Feb 13 15:38:21.967219 waagent[1934]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:38:21.967219 waagent[1934]: pkts bytes target prot opt in out source destination Feb 13 15:38:21.967219 waagent[1934]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:38:21.967219 waagent[1934]: pkts bytes target prot opt in out source destination Feb 13 15:38:21.967219 waagent[1934]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 15:38:21.967219 waagent[1934]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 15:38:21.967219 waagent[1934]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 15:38:21.970512 waagent[1934]: 2025-02-13T15:38:21.970448Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 13 15:38:21.970512 waagent[1934]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:38:21.970512 waagent[1934]: pkts bytes target prot opt in out source destination Feb 13 15:38:21.970512 waagent[1934]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:38:21.970512 waagent[1934]: pkts bytes target prot opt in out source destination Feb 13 
15:38:21.970512 waagent[1934]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:38:21.970512 waagent[1934]: pkts bytes target prot opt in out source destination Feb 13 15:38:21.970512 waagent[1934]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 15:38:21.970512 waagent[1934]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 15:38:21.970512 waagent[1934]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 15:38:21.970998 waagent[1934]: 2025-02-13T15:38:21.970803Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 13 15:38:22.104636 waagent[1934]: 2025-02-13T15:38:22.104490Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C18F0DF9-5DC8-4459-A884-F528E02D5919;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Feb 13 15:38:27.606321 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:38:27.611893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:38:27.707723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:38:27.712223 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:38:28.299869 kubelet[2170]: E0213 15:38:28.299800 2170 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:38:28.304201 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:38:28.304415 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:38:38.356517 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:38:38.366846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:38:38.468483 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:38:38.478932 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:38:38.944435 chronyd[1694]: Selected source PHC0 Feb 13 15:38:39.019992 kubelet[2186]: E0213 15:38:39.019905 2186 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:38:39.022785 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:38:39.022998 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:38:47.126544 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:38:47.133889 systemd[1]: Started sshd@0-10.200.8.20:22-10.200.16.10:57626.service - OpenSSH per-connection server daemon (10.200.16.10:57626). Feb 13 15:38:47.897366 sshd[2196]: Accepted publickey for core from 10.200.16.10 port 57626 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:38:47.899112 sshd-session[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:38:47.903956 systemd-logind[1692]: New session 3 of user core. Feb 13 15:38:47.910774 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:38:48.449908 systemd[1]: Started sshd@1-10.200.8.20:22-10.200.16.10:57628.service - OpenSSH per-connection server daemon (10.200.16.10:57628). 
Feb 13 15:38:49.078562 sshd[2201]: Accepted publickey for core from 10.200.16.10 port 57628 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:38:49.080291 sshd-session[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:38:49.081419 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:38:49.087867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:38:49.092475 systemd-logind[1692]: New session 4 of user core. Feb 13 15:38:49.097010 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:38:49.225287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:38:49.230202 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:38:49.524269 sshd[2206]: Connection closed by 10.200.16.10 port 57628 Feb 13 15:38:49.525159 sshd-session[2201]: pam_unix(sshd:session): session closed for user core Feb 13 15:38:49.529182 systemd[1]: sshd@1-10.200.8.20:22-10.200.16.10:57628.service: Deactivated successfully. Feb 13 15:38:49.531102 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:38:49.531863 systemd-logind[1692]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:38:49.532899 systemd-logind[1692]: Removed session 4. Feb 13 15:38:49.636024 systemd[1]: Started sshd@2-10.200.8.20:22-10.200.16.10:56074.service - OpenSSH per-connection server daemon (10.200.16.10:56074). 
Feb 13 15:38:49.755240 kubelet[2212]: E0213 15:38:49.755177 2212 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:38:49.758137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:38:49.758332 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:38:50.276135 sshd[2222]: Accepted publickey for core from 10.200.16.10 port 56074 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:38:50.285769 sshd-session[2222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:38:50.290104 systemd-logind[1692]: New session 5 of user core. Feb 13 15:38:50.299757 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:38:50.718517 sshd[2226]: Connection closed by 10.200.16.10 port 56074 Feb 13 15:38:50.719393 sshd-session[2222]: pam_unix(sshd:session): session closed for user core Feb 13 15:38:50.723848 systemd[1]: sshd@2-10.200.8.20:22-10.200.16.10:56074.service: Deactivated successfully. Feb 13 15:38:50.726130 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:38:50.727031 systemd-logind[1692]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:38:50.728146 systemd-logind[1692]: Removed session 5. Feb 13 15:38:50.832601 systemd[1]: Started sshd@3-10.200.8.20:22-10.200.16.10:56080.service - OpenSSH per-connection server daemon (10.200.16.10:56080). 
Feb 13 15:38:51.469829 sshd[2231]: Accepted publickey for core from 10.200.16.10 port 56080 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:38:51.471490 sshd-session[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:38:51.476099 systemd-logind[1692]: New session 6 of user core. Feb 13 15:38:51.482766 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:38:51.915411 sshd[2233]: Connection closed by 10.200.16.10 port 56080 Feb 13 15:38:51.916331 sshd-session[2231]: pam_unix(sshd:session): session closed for user core Feb 13 15:38:51.920800 systemd[1]: sshd@3-10.200.8.20:22-10.200.16.10:56080.service: Deactivated successfully. Feb 13 15:38:51.922994 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:38:51.923901 systemd-logind[1692]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:38:51.924824 systemd-logind[1692]: Removed session 6. Feb 13 15:38:52.029532 systemd[1]: Started sshd@4-10.200.8.20:22-10.200.16.10:56086.service - OpenSSH per-connection server daemon (10.200.16.10:56086). Feb 13 15:38:52.667678 sshd[2238]: Accepted publickey for core from 10.200.16.10 port 56086 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:38:52.669325 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:38:52.674035 systemd-logind[1692]: New session 7 of user core. Feb 13 15:38:52.683771 systemd[1]: Started session-7.scope - Session 7 of User core. 
Feb 13 15:38:53.162683 sudo[2241]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:38:53.163064 sudo[2241]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:38:53.189154 sudo[2241]: pam_unix(sudo:session): session closed for user root Feb 13 15:38:53.290159 sshd[2240]: Connection closed by 10.200.16.10 port 56086 Feb 13 15:38:53.291442 sshd-session[2238]: pam_unix(sshd:session): session closed for user core Feb 13 15:38:53.296200 systemd[1]: sshd@4-10.200.8.20:22-10.200.16.10:56086.service: Deactivated successfully. Feb 13 15:38:53.298394 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:38:53.299360 systemd-logind[1692]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:38:53.300478 systemd-logind[1692]: Removed session 7. Feb 13 15:38:53.400691 systemd[1]: Started sshd@5-10.200.8.20:22-10.200.16.10:56092.service - OpenSSH per-connection server daemon (10.200.16.10:56092). Feb 13 15:38:54.036042 sshd[2246]: Accepted publickey for core from 10.200.16.10 port 56092 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:38:54.037816 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:38:54.043424 systemd-logind[1692]: New session 8 of user core. Feb 13 15:38:54.049767 systemd[1]: Started session-8.scope - Session 8 of User core. 
Feb 13 15:38:54.380688 sudo[2250]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:38:54.381048 sudo[2250]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:38:54.384401 sudo[2250]: pam_unix(sudo:session): session closed for user root Feb 13 15:38:54.389347 sudo[2249]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:38:54.389712 sudo[2249]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:38:54.405028 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:38:54.431325 augenrules[2272]: No rules Feb 13 15:38:54.432772 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:38:54.433015 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:38:54.434249 sudo[2249]: pam_unix(sudo:session): session closed for user root Feb 13 15:38:54.536452 sshd[2248]: Connection closed by 10.200.16.10 port 56092 Feb 13 15:38:54.537292 sshd-session[2246]: pam_unix(sshd:session): session closed for user core Feb 13 15:38:54.541960 systemd[1]: sshd@5-10.200.8.20:22-10.200.16.10:56092.service: Deactivated successfully. Feb 13 15:38:54.544165 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:38:54.544942 systemd-logind[1692]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:38:54.545910 systemd-logind[1692]: Removed session 8. Feb 13 15:38:54.651175 systemd[1]: Started sshd@6-10.200.8.20:22-10.200.16.10:56106.service - OpenSSH per-connection server daemon (10.200.16.10:56106). 
Feb 13 15:38:55.277952 sshd[2280]: Accepted publickey for core from 10.200.16.10 port 56106 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:38:55.279731 sshd-session[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:38:55.285493 systemd-logind[1692]: New session 9 of user core. Feb 13 15:38:55.294778 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:38:55.622271 sudo[2283]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:38:55.622726 sudo[2283]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:38:57.428003 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:38:57.430069 (dockerd)[2300]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:38:58.284488 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 13 15:38:59.184929 dockerd[2300]: time="2025-02-13T15:38:59.184862818Z" level=info msg="Starting up" Feb 13 15:38:59.675832 dockerd[2300]: time="2025-02-13T15:38:59.675784068Z" level=info msg="Loading containers: start." Feb 13 15:38:59.856042 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 15:38:59.861874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:00.023786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:39:00.034933 (kubelet)[2410]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:39:00.534643 kernel: Initializing XFRM netlink socket Feb 13 15:39:00.543827 kubelet[2410]: E0213 15:39:00.543771 2410 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:39:00.546481 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:39:00.546725 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:39:00.679802 systemd-networkd[1324]: docker0: Link UP Feb 13 15:39:00.734717 dockerd[2300]: time="2025-02-13T15:39:00.734672422Z" level=info msg="Loading containers: done." Feb 13 15:39:00.868020 update_engine[1695]: I20250213 15:39:00.867933 1695 update_attempter.cc:509] Updating boot flags... 
Feb 13 15:39:00.873723 dockerd[2300]: time="2025-02-13T15:39:00.873592445Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:39:00.873860 dockerd[2300]: time="2025-02-13T15:39:00.873815551Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:39:00.874002 dockerd[2300]: time="2025-02-13T15:39:00.873979555Z" level=info msg="Daemon has completed initialization" Feb 13 15:39:00.921720 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2481) Feb 13 15:39:00.988441 dockerd[2300]: time="2025-02-13T15:39:00.987699839Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:39:00.989160 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:39:02.792715 containerd[1720]: time="2025-02-13T15:39:02.792676214Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 15:39:03.474000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2765763391.mount: Deactivated successfully. 
Feb 13 15:39:05.394605 containerd[1720]: time="2025-02-13T15:39:05.394538551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:05.397669 containerd[1720]: time="2025-02-13T15:39:05.397595224Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=35142291" Feb 13 15:39:05.404849 containerd[1720]: time="2025-02-13T15:39:05.404789897Z" level=info msg="ImageCreate event name:\"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:05.412936 containerd[1720]: time="2025-02-13T15:39:05.412893692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:05.414874 containerd[1720]: time="2025-02-13T15:39:05.413911516Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"35139083\" in 2.621189401s" Feb 13 15:39:05.414874 containerd[1720]: time="2025-02-13T15:39:05.413955317Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\"" Feb 13 15:39:05.436517 containerd[1720]: time="2025-02-13T15:39:05.436481758Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 15:39:07.468291 containerd[1720]: time="2025-02-13T15:39:07.468221347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:07.475371 containerd[1720]: time="2025-02-13T15:39:07.475289517Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=32213172" Feb 13 15:39:07.478606 containerd[1720]: time="2025-02-13T15:39:07.478543195Z" level=info msg="ImageCreate event name:\"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:07.484761 containerd[1720]: time="2025-02-13T15:39:07.484713643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:07.485698 containerd[1720]: time="2025-02-13T15:39:07.485665566Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"33659710\" in 2.049150307s" Feb 13 15:39:07.485698 containerd[1720]: time="2025-02-13T15:39:07.485698367Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\"" Feb 13 15:39:07.509535 containerd[1720]: time="2025-02-13T15:39:07.509485138Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 15:39:08.874879 containerd[1720]: time="2025-02-13T15:39:08.874815724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:08.877949 containerd[1720]: 
time="2025-02-13T15:39:08.877876598Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=17334064" Feb 13 15:39:08.883362 containerd[1720]: time="2025-02-13T15:39:08.883304228Z" level=info msg="ImageCreate event name:\"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:08.888499 containerd[1720]: time="2025-02-13T15:39:08.888438151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:08.889659 containerd[1720]: time="2025-02-13T15:39:08.889437175Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"18780620\" in 1.379908836s" Feb 13 15:39:08.889659 containerd[1720]: time="2025-02-13T15:39:08.889476876Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\"" Feb 13 15:39:08.912945 containerd[1720]: time="2025-02-13T15:39:08.912901439Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 15:39:10.099048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2745131664.mount: Deactivated successfully. 
Feb 13 15:39:10.560874 containerd[1720]: time="2025-02-13T15:39:10.560804410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:10.563599 containerd[1720]: time="2025-02-13T15:39:10.563536676Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28620600" Feb 13 15:39:10.568989 containerd[1720]: time="2025-02-13T15:39:10.568935406Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:10.574472 containerd[1720]: time="2025-02-13T15:39:10.574245133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:10.577625 containerd[1720]: time="2025-02-13T15:39:10.576470687Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 1.663524347s" Feb 13 15:39:10.577625 containerd[1720]: time="2025-02-13T15:39:10.576514088Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\"" Feb 13 15:39:10.602291 containerd[1720]: time="2025-02-13T15:39:10.602255706Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:39:10.606057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 15:39:10.611953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 15:39:10.712771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:10.717681 (kubelet)[2660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:39:10.760054 kubelet[2660]: E0213 15:39:10.759989 2660 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:39:10.763025 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:39:10.763243 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:39:11.875323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1430002489.mount: Deactivated successfully. Feb 13 15:39:13.639141 containerd[1720]: time="2025-02-13T15:39:13.639069809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:13.643941 containerd[1720]: time="2025-02-13T15:39:13.643868028Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Feb 13 15:39:13.649003 containerd[1720]: time="2025-02-13T15:39:13.648937154Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:13.656605 containerd[1720]: time="2025-02-13T15:39:13.656538943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:13.657999 containerd[1720]: 
time="2025-02-13T15:39:13.657623370Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.055317264s" Feb 13 15:39:13.657999 containerd[1720]: time="2025-02-13T15:39:13.657668672Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 15:39:13.682271 containerd[1720]: time="2025-02-13T15:39:13.682231283Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:39:14.276598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3237193915.mount: Deactivated successfully. Feb 13 15:39:14.303201 containerd[1720]: time="2025-02-13T15:39:14.303131527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:14.305698 containerd[1720]: time="2025-02-13T15:39:14.305645290Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Feb 13 15:39:14.312054 containerd[1720]: time="2025-02-13T15:39:14.311990448Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:14.321088 containerd[1720]: time="2025-02-13T15:39:14.321003372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:14.322299 containerd[1720]: time="2025-02-13T15:39:14.321729990Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id 
\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 639.455907ms" Feb 13 15:39:14.322299 containerd[1720]: time="2025-02-13T15:39:14.321770891Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 15:39:14.343415 containerd[1720]: time="2025-02-13T15:39:14.343370828Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 15:39:14.984993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1946896446.mount: Deactivated successfully. Feb 13 15:39:17.387882 containerd[1720]: time="2025-02-13T15:39:17.387816559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:17.389790 containerd[1720]: time="2025-02-13T15:39:17.389730606Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Feb 13 15:39:17.392749 containerd[1720]: time="2025-02-13T15:39:17.392692780Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:17.397664 containerd[1720]: time="2025-02-13T15:39:17.397625803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:17.398859 containerd[1720]: time="2025-02-13T15:39:17.398688929Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest 
\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.05527s" Feb 13 15:39:17.398859 containerd[1720]: time="2025-02-13T15:39:17.398727630Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Feb 13 15:39:20.856354 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 15:39:20.865719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:21.539672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:21.551003 (kubelet)[2837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:39:21.565072 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:21.567363 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:39:21.567644 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:21.576418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:21.601035 systemd[1]: Reloading requested from client PID 2850 ('systemctl') (unit session-9.scope)... Feb 13 15:39:21.601049 systemd[1]: Reloading... Feb 13 15:39:21.689699 zram_generator::config[2889]: No configuration found. Feb 13 15:39:21.820203 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:39:21.908226 systemd[1]: Reloading finished in 306 ms. Feb 13 15:39:21.986393 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:39:21.986532 systemd[1]: kubelet.service: Failed with result 'signal'. 
Feb 13 15:39:21.986887 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:21.992000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:22.629165 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:22.635120 (kubelet)[2957]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:39:23.258550 kubelet[2957]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:39:23.258550 kubelet[2957]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:39:23.258550 kubelet[2957]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:39:23.259105 kubelet[2957]: I0213 15:39:23.258625 2957 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:39:23.561796 kubelet[2957]: I0213 15:39:23.559596 2957 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:39:23.561796 kubelet[2957]: I0213 15:39:23.559638 2957 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:39:23.561796 kubelet[2957]: I0213 15:39:23.559870 2957 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:39:23.579300 kubelet[2957]: E0213 15:39:23.579252 2957 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.20:6443: connect: connection refused Feb 13 15:39:23.580826 kubelet[2957]: I0213 15:39:23.580150 2957 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:39:23.592779 kubelet[2957]: I0213 15:39:23.592748 2957 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:39:23.593056 kubelet[2957]: I0213 15:39:23.593036 2957 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:39:23.593247 kubelet[2957]: I0213 15:39:23.593226 2957 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:39:23.594029 kubelet[2957]: I0213 15:39:23.594005 2957 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:39:23.594103 kubelet[2957]: I0213 15:39:23.594032 2957 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:39:23.594199 kubelet[2957]: I0213 
15:39:23.594164 2957 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:39:23.594312 kubelet[2957]: I0213 15:39:23.594296 2957 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:39:23.594567 kubelet[2957]: I0213 15:39:23.594319 2957 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:39:23.594567 kubelet[2957]: I0213 15:39:23.594355 2957 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:39:23.594567 kubelet[2957]: I0213 15:39:23.594370 2957 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:39:23.596491 kubelet[2957]: I0213 15:39:23.596197 2957 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:39:23.599751 kubelet[2957]: I0213 15:39:23.599596 2957 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:39:23.601109 kubelet[2957]: W0213 15:39:23.600791 2957 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 13 15:39:23.601109 kubelet[2957]: E0213 15:39:23.600868 2957 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 13 15:39:23.601109 kubelet[2957]: W0213 15:39:23.600955 2957 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-a-02a9d39241&limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 13 15:39:23.601109 kubelet[2957]: E0213 15:39:23.600992 2957 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-a-02a9d39241&limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 13 15:39:23.601109 kubelet[2957]: W0213 15:39:23.601010 2957 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:39:23.602018 kubelet[2957]: I0213 15:39:23.601670 2957 server.go:1256] "Started kubelet" Feb 13 15:39:23.602018 kubelet[2957]: I0213 15:39:23.601726 2957 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:39:23.603194 kubelet[2957]: I0213 15:39:23.602555 2957 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:39:23.604187 kubelet[2957]: I0213 15:39:23.603675 2957 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:39:23.604187 kubelet[2957]: I0213 15:39:23.603907 2957 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:39:23.607130 kubelet[2957]: I0213 15:39:23.606973 2957 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:39:23.609255 kubelet[2957]: E0213 15:39:23.609229 2957 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.20:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.20:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.1-a-02a9d39241.1823ceb6550ffa79 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.1-a-02a9d39241,UID:ci-4152.2.1-a-02a9d39241,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.1-a-02a9d39241,},FirstTimestamp:2025-02-13 15:39:23.601623673 +0000 
UTC m=+0.962039297,LastTimestamp:2025-02-13 15:39:23.601623673 +0000 UTC m=+0.962039297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.1-a-02a9d39241,}" Feb 13 15:39:23.614119 kubelet[2957]: I0213 15:39:23.613956 2957 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:39:23.615383 kubelet[2957]: E0213 15:39:23.615364 2957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-a-02a9d39241?timeout=10s\": dial tcp 10.200.8.20:6443: connect: connection refused" interval="200ms" Feb 13 15:39:23.615951 kubelet[2957]: I0213 15:39:23.615935 2957 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:39:23.616123 kubelet[2957]: I0213 15:39:23.616104 2957 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:39:23.617037 kubelet[2957]: I0213 15:39:23.617017 2957 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:39:23.617175 kubelet[2957]: I0213 15:39:23.617134 2957 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:39:23.617998 kubelet[2957]: E0213 15:39:23.617981 2957 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:39:23.618299 kubelet[2957]: I0213 15:39:23.618283 2957 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:39:23.628466 kubelet[2957]: I0213 15:39:23.628440 2957 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:39:23.629523 kubelet[2957]: I0213 15:39:23.629509 2957 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:39:23.629636 kubelet[2957]: I0213 15:39:23.629606 2957 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:39:23.629713 kubelet[2957]: I0213 15:39:23.629692 2957 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:39:23.629767 kubelet[2957]: E0213 15:39:23.629748 2957 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:39:23.635801 kubelet[2957]: W0213 15:39:23.635724 2957 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 13 15:39:23.635884 kubelet[2957]: E0213 15:39:23.635813 2957 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 13 15:39:23.635958 kubelet[2957]: W0213 15:39:23.635914 2957 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 13 15:39:23.636012 kubelet[2957]: E0213 15:39:23.635975 2957 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 13 15:39:23.670895 kubelet[2957]: I0213 15:39:23.670863 2957 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:39:23.670895 kubelet[2957]: I0213 15:39:23.670886 
2957 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:39:23.671070 kubelet[2957]: I0213 15:39:23.670916 2957 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:39:23.677097 kubelet[2957]: I0213 15:39:23.677063 2957 policy_none.go:49] "None policy: Start" Feb 13 15:39:23.677684 kubelet[2957]: I0213 15:39:23.677633 2957 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:39:23.677684 kubelet[2957]: I0213 15:39:23.677666 2957 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:39:23.688923 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:39:23.696435 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:39:23.699712 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:39:23.711464 kubelet[2957]: I0213 15:39:23.711338 2957 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:39:23.711680 kubelet[2957]: I0213 15:39:23.711660 2957 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:39:23.713261 kubelet[2957]: E0213 15:39:23.713155 2957 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.1-a-02a9d39241\" not found" Feb 13 15:39:23.716800 kubelet[2957]: I0213 15:39:23.716728 2957 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-02a9d39241" Feb 13 15:39:23.717145 kubelet[2957]: E0213 15:39:23.717116 2957 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-4152.2.1-a-02a9d39241" Feb 13 15:39:23.730450 kubelet[2957]: I0213 15:39:23.730430 2957 topology_manager.go:215] "Topology Admit Handler" 
podUID="2c6a43a9d30ba7055f58753dee1eea2f" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:23.732045 kubelet[2957]: I0213 15:39:23.732022 2957 topology_manager.go:215] "Topology Admit Handler" podUID="b61e65a100906e6b470fb3c12140bdd8" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:23.733734 kubelet[2957]: I0213 15:39:23.733542 2957 topology_manager.go:215] "Topology Admit Handler" podUID="7491754ca6c5ffb93dcb30c1ab5f6cfd" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:23.740427 systemd[1]: Created slice kubepods-burstable-pod2c6a43a9d30ba7055f58753dee1eea2f.slice - libcontainer container kubepods-burstable-pod2c6a43a9d30ba7055f58753dee1eea2f.slice. Feb 13 15:39:23.759516 systemd[1]: Created slice kubepods-burstable-podb61e65a100906e6b470fb3c12140bdd8.slice - libcontainer container kubepods-burstable-podb61e65a100906e6b470fb3c12140bdd8.slice. Feb 13 15:39:23.769449 systemd[1]: Created slice kubepods-burstable-pod7491754ca6c5ffb93dcb30c1ab5f6cfd.slice - libcontainer container kubepods-burstable-pod7491754ca6c5ffb93dcb30c1ab5f6cfd.slice. 
Feb 13 15:39:23.817050 kubelet[2957]: E0213 15:39:23.816894 2957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-a-02a9d39241?timeout=10s\": dial tcp 10.200.8.20:6443: connect: connection refused" interval="400ms" Feb 13 15:39:23.919428 kubelet[2957]: I0213 15:39:23.919387 2957 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c6a43a9d30ba7055f58753dee1eea2f-ca-certs\") pod \"kube-apiserver-ci-4152.2.1-a-02a9d39241\" (UID: \"2c6a43a9d30ba7055f58753dee1eea2f\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:23.919627 kubelet[2957]: I0213 15:39:23.919487 2957 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c6a43a9d30ba7055f58753dee1eea2f-k8s-certs\") pod \"kube-apiserver-ci-4152.2.1-a-02a9d39241\" (UID: \"2c6a43a9d30ba7055f58753dee1eea2f\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:23.919627 kubelet[2957]: I0213 15:39:23.919552 2957 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c6a43a9d30ba7055f58753dee1eea2f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.1-a-02a9d39241\" (UID: \"2c6a43a9d30ba7055f58753dee1eea2f\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:23.919627 kubelet[2957]: I0213 15:39:23.919587 2957 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b61e65a100906e6b470fb3c12140bdd8-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.1-a-02a9d39241\" (UID: \"b61e65a100906e6b470fb3c12140bdd8\") " 
pod="kube-system/kube-controller-manager-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:23.919826 kubelet[2957]: I0213 15:39:23.919658 2957 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7491754ca6c5ffb93dcb30c1ab5f6cfd-kubeconfig\") pod \"kube-scheduler-ci-4152.2.1-a-02a9d39241\" (UID: \"7491754ca6c5ffb93dcb30c1ab5f6cfd\") " pod="kube-system/kube-scheduler-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:23.919826 kubelet[2957]: I0213 15:39:23.919738 2957 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b61e65a100906e6b470fb3c12140bdd8-ca-certs\") pod \"kube-controller-manager-ci-4152.2.1-a-02a9d39241\" (UID: \"b61e65a100906e6b470fb3c12140bdd8\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:23.919826 kubelet[2957]: I0213 15:39:23.919805 2957 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b61e65a100906e6b470fb3c12140bdd8-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.1-a-02a9d39241\" (UID: \"b61e65a100906e6b470fb3c12140bdd8\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:23.919980 kubelet[2957]: I0213 15:39:23.919879 2957 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b61e65a100906e6b470fb3c12140bdd8-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.1-a-02a9d39241\" (UID: \"b61e65a100906e6b470fb3c12140bdd8\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:23.919980 kubelet[2957]: I0213 15:39:23.919927 2957 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b61e65a100906e6b470fb3c12140bdd8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.1-a-02a9d39241\" (UID: \"b61e65a100906e6b470fb3c12140bdd8\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:23.920601 kubelet[2957]: I0213 15:39:23.920571 2957 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-02a9d39241" Feb 13 15:39:23.921026 kubelet[2957]: E0213 15:39:23.920999 2957 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-4152.2.1-a-02a9d39241" Feb 13 15:39:24.059140 containerd[1720]: time="2025-02-13T15:39:24.059090826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.1-a-02a9d39241,Uid:2c6a43a9d30ba7055f58753dee1eea2f,Namespace:kube-system,Attempt:0,}" Feb 13 15:39:24.068834 containerd[1720]: time="2025-02-13T15:39:24.068723569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.1-a-02a9d39241,Uid:b61e65a100906e6b470fb3c12140bdd8,Namespace:kube-system,Attempt:0,}" Feb 13 15:39:24.072517 containerd[1720]: time="2025-02-13T15:39:24.072292259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.1-a-02a9d39241,Uid:7491754ca6c5ffb93dcb30c1ab5f6cfd,Namespace:kube-system,Attempt:0,}" Feb 13 15:39:24.218506 kubelet[2957]: E0213 15:39:24.218428 2957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-a-02a9d39241?timeout=10s\": dial tcp 10.200.8.20:6443: connect: connection refused" interval="800ms" Feb 13 15:39:24.323466 kubelet[2957]: I0213 15:39:24.323327 2957 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-02a9d39241" Feb 13 15:39:24.324129 kubelet[2957]: E0213 15:39:24.324031 
2957 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-4152.2.1-a-02a9d39241" Feb 13 15:39:24.515572 kubelet[2957]: W0213 15:39:24.515492 2957 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 13 15:39:24.515572 kubelet[2957]: E0213 15:39:24.515573 2957 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 13 15:39:24.691668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2886997872.mount: Deactivated successfully. Feb 13 15:39:24.721757 containerd[1720]: time="2025-02-13T15:39:24.721696360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:39:24.737902 containerd[1720]: time="2025-02-13T15:39:24.737711964Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 15:39:24.741096 containerd[1720]: time="2025-02-13T15:39:24.741052448Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:39:24.745528 containerd[1720]: time="2025-02-13T15:39:24.745486560Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:39:24.754686 containerd[1720]: time="2025-02-13T15:39:24.754366785Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:39:24.758759 containerd[1720]: time="2025-02-13T15:39:24.758723095Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:39:24.763763 containerd[1720]: time="2025-02-13T15:39:24.763728821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:39:24.764531 containerd[1720]: time="2025-02-13T15:39:24.764489240Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 705.276311ms" Feb 13 15:39:24.766157 containerd[1720]: time="2025-02-13T15:39:24.766115581Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:39:24.770689 containerd[1720]: time="2025-02-13T15:39:24.770657996Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 701.838525ms" Feb 13 15:39:24.792204 containerd[1720]: time="2025-02-13T15:39:24.792148139Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 719.768678ms" Feb 13 15:39:25.019447 kubelet[2957]: E0213 15:39:25.019316 2957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-a-02a9d39241?timeout=10s\": dial tcp 10.200.8.20:6443: connect: connection refused" interval="1.6s" Feb 13 15:39:25.070237 kubelet[2957]: W0213 15:39:25.070157 2957 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-a-02a9d39241&limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 13 15:39:25.070237 kubelet[2957]: E0213 15:39:25.070240 2957 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-a-02a9d39241&limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 13 15:39:25.115178 kubelet[2957]: W0213 15:39:25.115103 2957 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 13 15:39:25.115178 kubelet[2957]: E0213 15:39:25.115181 2957 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 13 15:39:25.126077 
kubelet[2957]: I0213 15:39:25.126054 2957 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-02a9d39241" Feb 13 15:39:25.126388 kubelet[2957]: E0213 15:39:25.126366 2957 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-4152.2.1-a-02a9d39241" Feb 13 15:39:25.167237 kubelet[2957]: W0213 15:39:25.167170 2957 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 13 15:39:25.167237 kubelet[2957]: E0213 15:39:25.167247 2957 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 13 15:39:25.442187 containerd[1720]: time="2025-02-13T15:39:25.441892248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:25.442187 containerd[1720]: time="2025-02-13T15:39:25.441965450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:25.442187 containerd[1720]: time="2025-02-13T15:39:25.441986950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:25.443577 containerd[1720]: time="2025-02-13T15:39:25.442392461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:25.443704 containerd[1720]: time="2025-02-13T15:39:25.443311084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:25.443704 containerd[1720]: time="2025-02-13T15:39:25.443379285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:25.443704 containerd[1720]: time="2025-02-13T15:39:25.443402086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:25.447194 containerd[1720]: time="2025-02-13T15:39:25.447105880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:25.448696 containerd[1720]: time="2025-02-13T15:39:25.440875922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:25.448696 containerd[1720]: time="2025-02-13T15:39:25.447641893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:25.448696 containerd[1720]: time="2025-02-13T15:39:25.447662194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:25.448696 containerd[1720]: time="2025-02-13T15:39:25.447743996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:25.479830 systemd[1]: Started cri-containerd-32f924a3c95c42cc041deed8e116164391e2a25c98bab4ca1604f552af7c17e5.scope - libcontainer container 32f924a3c95c42cc041deed8e116164391e2a25c98bab4ca1604f552af7c17e5. 
Feb 13 15:39:25.486389 systemd[1]: Started cri-containerd-95e4957d8a69227955dd57217ab9a4d92540a6930e13c3199e3cc30806c78188.scope - libcontainer container 95e4957d8a69227955dd57217ab9a4d92540a6930e13c3199e3cc30806c78188. Feb 13 15:39:25.489097 systemd[1]: Started cri-containerd-be94d6d697cf852bab9fe8b616b1f6309f95350670e94d5c3e0c422c52f44a0f.scope - libcontainer container be94d6d697cf852bab9fe8b616b1f6309f95350670e94d5c3e0c422c52f44a0f. Feb 13 15:39:25.579604 containerd[1720]: time="2025-02-13T15:39:25.576006935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.1-a-02a9d39241,Uid:7491754ca6c5ffb93dcb30c1ab5f6cfd,Namespace:kube-system,Attempt:0,} returns sandbox id \"95e4957d8a69227955dd57217ab9a4d92540a6930e13c3199e3cc30806c78188\"" Feb 13 15:39:25.581194 containerd[1720]: time="2025-02-13T15:39:25.581037362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.1-a-02a9d39241,Uid:2c6a43a9d30ba7055f58753dee1eea2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"32f924a3c95c42cc041deed8e116164391e2a25c98bab4ca1604f552af7c17e5\"" Feb 13 15:39:25.586637 containerd[1720]: time="2025-02-13T15:39:25.586536401Z" level=info msg="CreateContainer within sandbox \"95e4957d8a69227955dd57217ab9a4d92540a6930e13c3199e3cc30806c78188\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:39:25.587656 containerd[1720]: time="2025-02-13T15:39:25.586906510Z" level=info msg="CreateContainer within sandbox \"32f924a3c95c42cc041deed8e116164391e2a25c98bab4ca1604f552af7c17e5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:39:25.588494 containerd[1720]: time="2025-02-13T15:39:25.588398348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.1-a-02a9d39241,Uid:b61e65a100906e6b470fb3c12140bdd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"be94d6d697cf852bab9fe8b616b1f6309f95350670e94d5c3e0c422c52f44a0f\"" Feb 13 
15:39:25.603157 containerd[1720]: time="2025-02-13T15:39:25.603115420Z" level=info msg="CreateContainer within sandbox \"be94d6d697cf852bab9fe8b616b1f6309f95350670e94d5c3e0c422c52f44a0f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:39:25.672843 containerd[1720]: time="2025-02-13T15:39:25.672777979Z" level=info msg="CreateContainer within sandbox \"95e4957d8a69227955dd57217ab9a4d92540a6930e13c3199e3cc30806c78188\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"edf2235b5cf402fd19594eb66eda65b3b617930ca1006772638c46617fe3c737\"" Feb 13 15:39:25.673757 containerd[1720]: time="2025-02-13T15:39:25.673674901Z" level=info msg="StartContainer for \"edf2235b5cf402fd19594eb66eda65b3b617930ca1006772638c46617fe3c737\"" Feb 13 15:39:25.685201 containerd[1720]: time="2025-02-13T15:39:25.685153191Z" level=info msg="CreateContainer within sandbox \"be94d6d697cf852bab9fe8b616b1f6309f95350670e94d5c3e0c422c52f44a0f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"03593e1a24287732fc9885f2c83910389a8dbe9d6323b9c6f028bcac2e35a7f4\"" Feb 13 15:39:25.690979 containerd[1720]: time="2025-02-13T15:39:25.688800783Z" level=info msg="StartContainer for \"03593e1a24287732fc9885f2c83910389a8dbe9d6323b9c6f028bcac2e35a7f4\"" Feb 13 15:39:25.691096 kubelet[2957]: E0213 15:39:25.690542 2957 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.20:6443: connect: connection refused Feb 13 15:39:25.692142 containerd[1720]: time="2025-02-13T15:39:25.692108767Z" level=info msg="CreateContainer within sandbox \"32f924a3c95c42cc041deed8e116164391e2a25c98bab4ca1604f552af7c17e5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"8b6f3bf76f6e61ec1ae5ba3a1315f932476b1dc41c7387aefe2f2b50051d3ae3\"" Feb 13 15:39:25.693728 containerd[1720]: time="2025-02-13T15:39:25.693634706Z" level=info msg="StartContainer for \"8b6f3bf76f6e61ec1ae5ba3a1315f932476b1dc41c7387aefe2f2b50051d3ae3\"" Feb 13 15:39:25.736900 systemd[1]: Started cri-containerd-edf2235b5cf402fd19594eb66eda65b3b617930ca1006772638c46617fe3c737.scope - libcontainer container edf2235b5cf402fd19594eb66eda65b3b617930ca1006772638c46617fe3c737. Feb 13 15:39:25.756825 systemd[1]: Started cri-containerd-03593e1a24287732fc9885f2c83910389a8dbe9d6323b9c6f028bcac2e35a7f4.scope - libcontainer container 03593e1a24287732fc9885f2c83910389a8dbe9d6323b9c6f028bcac2e35a7f4. Feb 13 15:39:25.774778 systemd[1]: Started cri-containerd-8b6f3bf76f6e61ec1ae5ba3a1315f932476b1dc41c7387aefe2f2b50051d3ae3.scope - libcontainer container 8b6f3bf76f6e61ec1ae5ba3a1315f932476b1dc41c7387aefe2f2b50051d3ae3. Feb 13 15:39:25.832979 containerd[1720]: time="2025-02-13T15:39:25.832668817Z" level=info msg="StartContainer for \"edf2235b5cf402fd19594eb66eda65b3b617930ca1006772638c46617fe3c737\" returns successfully" Feb 13 15:39:25.850067 containerd[1720]: time="2025-02-13T15:39:25.850012555Z" level=info msg="StartContainer for \"03593e1a24287732fc9885f2c83910389a8dbe9d6323b9c6f028bcac2e35a7f4\" returns successfully" Feb 13 15:39:25.879357 containerd[1720]: time="2025-02-13T15:39:25.879298794Z" level=info msg="StartContainer for \"8b6f3bf76f6e61ec1ae5ba3a1315f932476b1dc41c7387aefe2f2b50051d3ae3\" returns successfully" Feb 13 15:39:26.730241 kubelet[2957]: I0213 15:39:26.730209 2957 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-02a9d39241" Feb 13 15:39:27.963372 kubelet[2957]: E0213 15:39:27.963312 2957 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152.2.1-a-02a9d39241\" not found" node="ci-4152.2.1-a-02a9d39241" Feb 13 15:39:28.066204 kubelet[2957]: I0213 15:39:28.065952 2957 
kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.1-a-02a9d39241" Feb 13 15:39:28.602729 kubelet[2957]: I0213 15:39:28.602670 2957 apiserver.go:52] "Watching apiserver" Feb 13 15:39:28.627397 kubelet[2957]: I0213 15:39:28.627338 2957 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:39:28.693341 kubelet[2957]: E0213 15:39:28.692671 2957 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152.2.1-a-02a9d39241\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:31.367878 systemd[1]: Reloading requested from client PID 3226 ('systemctl') (unit session-9.scope)... Feb 13 15:39:31.367895 systemd[1]: Reloading... Feb 13 15:39:31.503660 zram_generator::config[3266]: No configuration found. Feb 13 15:39:31.658946 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:39:31.761389 systemd[1]: Reloading finished in 393 ms. Feb 13 15:39:31.806808 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:31.816308 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:39:31.816545 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:31.823957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:31.922336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:31.929027 (kubelet)[3333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:39:31.987651 kubelet[3333]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:39:31.987651 kubelet[3333]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:39:31.987651 kubelet[3333]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:39:31.989050 kubelet[3333]: I0213 15:39:31.988286 3333 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:39:31.997286 kubelet[3333]: I0213 15:39:31.997247 3333 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:39:31.997286 kubelet[3333]: I0213 15:39:31.997274 3333 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:39:31.997530 kubelet[3333]: I0213 15:39:31.997508 3333 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:39:31.999187 kubelet[3333]: I0213 15:39:31.999130 3333 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:39:32.002412 kubelet[3333]: I0213 15:39:32.002097 3333 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:39:32.009650 kubelet[3333]: I0213 15:39:32.009587 3333 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:39:32.009893 kubelet[3333]: I0213 15:39:32.009865 3333 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:39:32.010100 kubelet[3333]: I0213 15:39:32.010074 3333 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:39:32.010226 kubelet[3333]: I0213 15:39:32.010119 3333 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:39:32.010226 kubelet[3333]: I0213 15:39:32.010133 3333 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:39:32.010226 kubelet[3333]: I0213 
15:39:32.010185 3333 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:39:32.010352 kubelet[3333]: I0213 15:39:32.010304 3333 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:39:32.010352 kubelet[3333]: I0213 15:39:32.010321 3333 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:39:32.010430 kubelet[3333]: I0213 15:39:32.010394 3333 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:39:32.010430 kubelet[3333]: I0213 15:39:32.010423 3333 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:39:32.013174 kubelet[3333]: I0213 15:39:32.013044 3333 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:39:32.013534 kubelet[3333]: I0213 15:39:32.013319 3333 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:39:32.013995 kubelet[3333]: I0213 15:39:32.013837 3333 server.go:1256] "Started kubelet" Feb 13 15:39:32.018992 kubelet[3333]: I0213 15:39:32.018967 3333 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:39:32.031270 kubelet[3333]: I0213 15:39:32.031237 3333 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:39:32.034069 kubelet[3333]: I0213 15:39:32.032349 3333 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:39:32.044019 kubelet[3333]: I0213 15:39:32.043992 3333 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:39:32.045217 kubelet[3333]: I0213 15:39:32.044332 3333 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:39:32.047979 kubelet[3333]: I0213 15:39:32.047959 3333 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:39:32.048550 kubelet[3333]: I0213 15:39:32.048530 3333 desired_state_of_world_populator.go:151] "Desired 
state populator starts to run" Feb 13 15:39:32.050021 kubelet[3333]: I0213 15:39:32.050003 3333 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:39:32.054842 kubelet[3333]: I0213 15:39:32.054813 3333 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:39:32.060561 kubelet[3333]: I0213 15:39:32.060299 3333 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:39:32.060561 kubelet[3333]: I0213 15:39:32.060348 3333 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:39:32.060561 kubelet[3333]: I0213 15:39:32.060370 3333 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:39:32.060561 kubelet[3333]: E0213 15:39:32.060444 3333 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:39:32.061192 kubelet[3333]: I0213 15:39:32.061162 3333 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:39:32.061399 kubelet[3333]: I0213 15:39:32.061376 3333 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:39:32.067022 kubelet[3333]: I0213 15:39:32.067006 3333 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:39:32.081741 kubelet[3333]: E0213 15:39:32.081716 3333 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:39:32.117775 kubelet[3333]: I0213 15:39:32.117706 3333 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:39:32.117775 kubelet[3333]: I0213 15:39:32.117771 3333 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:39:32.117988 kubelet[3333]: I0213 15:39:32.117796 3333 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:39:32.117988 kubelet[3333]: I0213 15:39:32.117975 3333 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:39:32.118075 kubelet[3333]: I0213 15:39:32.118000 3333 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:39:32.118075 kubelet[3333]: I0213 15:39:32.118011 3333 policy_none.go:49] "None policy: Start" Feb 13 15:39:32.118708 kubelet[3333]: I0213 15:39:32.118645 3333 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:39:32.118708 kubelet[3333]: I0213 15:39:32.118681 3333 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:39:32.118913 kubelet[3333]: I0213 15:39:32.118898 3333 state_mem.go:75] "Updated machine memory state" Feb 13 15:39:32.123447 kubelet[3333]: I0213 15:39:32.123422 3333 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:39:32.124107 kubelet[3333]: I0213 15:39:32.123702 3333 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:39:32.151043 kubelet[3333]: I0213 15:39:32.151018 3333 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-02a9d39241" Feb 13 15:39:32.160737 kubelet[3333]: I0213 15:39:32.160597 3333 topology_manager.go:215] "Topology Admit Handler" podUID="2c6a43a9d30ba7055f58753dee1eea2f" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:32.161070 kubelet[3333]: I0213 15:39:32.160949 3333 topology_manager.go:215] "Topology Admit Handler" 
podUID="b61e65a100906e6b470fb3c12140bdd8" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:32.161070 kubelet[3333]: I0213 15:39:32.161039 3333 topology_manager.go:215] "Topology Admit Handler" podUID="7491754ca6c5ffb93dcb30c1ab5f6cfd" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:32.164702 kubelet[3333]: I0213 15:39:32.164531 3333 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152.2.1-a-02a9d39241" Feb 13 15:39:32.164702 kubelet[3333]: I0213 15:39:32.164637 3333 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.1-a-02a9d39241" Feb 13 15:39:32.169879 kubelet[3333]: W0213 15:39:32.169852 3333 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:39:32.173574 kubelet[3333]: W0213 15:39:32.173209 3333 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:39:32.173574 kubelet[3333]: W0213 15:39:32.173262 3333 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:39:32.251201 kubelet[3333]: I0213 15:39:32.251119 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b61e65a100906e6b470fb3c12140bdd8-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.1-a-02a9d39241\" (UID: \"b61e65a100906e6b470fb3c12140bdd8\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:32.251201 kubelet[3333]: I0213 15:39:32.251167 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b61e65a100906e6b470fb3c12140bdd8-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.1-a-02a9d39241\" (UID: \"b61e65a100906e6b470fb3c12140bdd8\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:32.251201 kubelet[3333]: I0213 15:39:32.251200 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b61e65a100906e6b470fb3c12140bdd8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.1-a-02a9d39241\" (UID: \"b61e65a100906e6b470fb3c12140bdd8\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:32.251642 kubelet[3333]: I0213 15:39:32.251232 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7491754ca6c5ffb93dcb30c1ab5f6cfd-kubeconfig\") pod \"kube-scheduler-ci-4152.2.1-a-02a9d39241\" (UID: \"7491754ca6c5ffb93dcb30c1ab5f6cfd\") " pod="kube-system/kube-scheduler-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:32.251642 kubelet[3333]: I0213 15:39:32.251258 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c6a43a9d30ba7055f58753dee1eea2f-ca-certs\") pod \"kube-apiserver-ci-4152.2.1-a-02a9d39241\" (UID: \"2c6a43a9d30ba7055f58753dee1eea2f\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:32.251642 kubelet[3333]: I0213 15:39:32.251283 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c6a43a9d30ba7055f58753dee1eea2f-k8s-certs\") pod \"kube-apiserver-ci-4152.2.1-a-02a9d39241\" (UID: \"2c6a43a9d30ba7055f58753dee1eea2f\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:32.251642 kubelet[3333]: I0213 15:39:32.251309 
3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b61e65a100906e6b470fb3c12140bdd8-ca-certs\") pod \"kube-controller-manager-ci-4152.2.1-a-02a9d39241\" (UID: \"b61e65a100906e6b470fb3c12140bdd8\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:32.251642 kubelet[3333]: I0213 15:39:32.251355 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c6a43a9d30ba7055f58753dee1eea2f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.1-a-02a9d39241\" (UID: \"2c6a43a9d30ba7055f58753dee1eea2f\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:32.251838 kubelet[3333]: I0213 15:39:32.251392 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b61e65a100906e6b470fb3c12140bdd8-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.1-a-02a9d39241\" (UID: \"b61e65a100906e6b470fb3c12140bdd8\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:32.456933 sudo[3362]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:39:32.457307 sudo[3362]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:39:32.985412 sudo[3362]: pam_unix(sudo:session): session closed for user root Feb 13 15:39:33.011440 kubelet[3333]: I0213 15:39:33.011141 3333 apiserver.go:52] "Watching apiserver" Feb 13 15:39:33.049996 kubelet[3333]: I0213 15:39:33.049941 3333 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:39:33.108491 kubelet[3333]: W0213 15:39:33.108458 3333 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:39:33.108689 kubelet[3333]: E0213 15:39:33.108541 3333 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152.2.1-a-02a9d39241\" already exists" pod="kube-system/kube-apiserver-ci-4152.2.1-a-02a9d39241" Feb 13 15:39:33.142581 kubelet[3333]: I0213 15:39:33.142539 3333 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.2.1-a-02a9d39241" podStartSLOduration=1.142466407 podStartE2EDuration="1.142466407s" podCreationTimestamp="2025-02-13 15:39:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:33.130971203 +0000 UTC m=+1.196463539" watchObservedRunningTime="2025-02-13 15:39:33.142466407 +0000 UTC m=+1.207958743" Feb 13 15:39:33.154447 kubelet[3333]: I0213 15:39:33.154402 3333 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.2.1-a-02a9d39241" podStartSLOduration=1.154353922 podStartE2EDuration="1.154353922s" podCreationTimestamp="2025-02-13 15:39:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:33.143119225 +0000 UTC m=+1.208611561" watchObservedRunningTime="2025-02-13 15:39:33.154353922 +0000 UTC m=+1.219846358" Feb 13 15:39:33.173635 kubelet[3333]: I0213 15:39:33.172140 3333 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.2.1-a-02a9d39241" podStartSLOduration=1.17207499 podStartE2EDuration="1.17207499s" podCreationTimestamp="2025-02-13 15:39:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:33.15466333 +0000 UTC m=+1.220155666" watchObservedRunningTime="2025-02-13 
15:39:33.17207499 +0000 UTC m=+1.237567426" Feb 13 15:39:34.493650 sudo[2283]: pam_unix(sudo:session): session closed for user root Feb 13 15:39:34.594073 sshd[2282]: Connection closed by 10.200.16.10 port 56106 Feb 13 15:39:34.595057 sshd-session[2280]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:34.600273 systemd[1]: sshd@6-10.200.8.20:22-10.200.16.10:56106.service: Deactivated successfully. Feb 13 15:39:34.602193 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:39:34.602414 systemd[1]: session-9.scope: Consumed 4.703s CPU time, 184.2M memory peak, 0B memory swap peak. Feb 13 15:39:34.603085 systemd-logind[1692]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:39:34.604319 systemd-logind[1692]: Removed session 9. Feb 13 15:39:44.573089 kubelet[3333]: I0213 15:39:44.572988 3333 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:39:44.574449 containerd[1720]: time="2025-02-13T15:39:44.573737826Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:39:44.575841 kubelet[3333]: I0213 15:39:44.573996 3333 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:39:45.450635 kubelet[3333]: I0213 15:39:45.447867 3333 topology_manager.go:215] "Topology Admit Handler" podUID="b239b778-29de-42ea-9197-936c009743d9" podNamespace="kube-system" podName="kube-proxy-gvf7b" Feb 13 15:39:45.456637 kubelet[3333]: I0213 15:39:45.456066 3333 topology_manager.go:215] "Topology Admit Handler" podUID="338946dd-71ce-4140-b262-0a1d56f0effe" podNamespace="kube-system" podName="cilium-xskkf" Feb 13 15:39:45.460412 systemd[1]: Created slice kubepods-besteffort-podb239b778_29de_42ea_9197_936c009743d9.slice - libcontainer container kubepods-besteffort-podb239b778_29de_42ea_9197_936c009743d9.slice. 
Feb 13 15:39:45.476312 systemd[1]: Created slice kubepods-burstable-pod338946dd_71ce_4140_b262_0a1d56f0effe.slice - libcontainer container kubepods-burstable-pod338946dd_71ce_4140_b262_0a1d56f0effe.slice.
Feb 13 15:39:45.536267 kubelet[3333]: I0213 15:39:45.534883 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-etc-cni-netd\") pod \"cilium-xskkf\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") " pod="kube-system/cilium-xskkf"
Feb 13 15:39:45.536267 kubelet[3333]: I0213 15:39:45.534965 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-host-proc-sys-kernel\") pod \"cilium-xskkf\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") " pod="kube-system/cilium-xskkf"
Feb 13 15:39:45.536267 kubelet[3333]: I0213 15:39:45.535013 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-cni-path\") pod \"cilium-xskkf\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") " pod="kube-system/cilium-xskkf"
Feb 13 15:39:45.536267 kubelet[3333]: I0213 15:39:45.535068 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b239b778-29de-42ea-9197-936c009743d9-kube-proxy\") pod \"kube-proxy-gvf7b\" (UID: \"b239b778-29de-42ea-9197-936c009743d9\") " pod="kube-system/kube-proxy-gvf7b"
Feb 13 15:39:45.536267 kubelet[3333]: I0213 15:39:45.535099 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-cilium-cgroup\") pod \"cilium-xskkf\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") " pod="kube-system/cilium-xskkf"
Feb 13 15:39:45.536267 kubelet[3333]: I0213 15:39:45.535129 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/338946dd-71ce-4140-b262-0a1d56f0effe-cilium-config-path\") pod \"cilium-xskkf\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") " pod="kube-system/cilium-xskkf"
Feb 13 15:39:45.536688 kubelet[3333]: I0213 15:39:45.535158 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/338946dd-71ce-4140-b262-0a1d56f0effe-hubble-tls\") pod \"cilium-xskkf\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") " pod="kube-system/cilium-xskkf"
Feb 13 15:39:45.536688 kubelet[3333]: I0213 15:39:45.535185 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6bvx\" (UniqueName: \"kubernetes.io/projected/338946dd-71ce-4140-b262-0a1d56f0effe-kube-api-access-f6bvx\") pod \"cilium-xskkf\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") " pod="kube-system/cilium-xskkf"
Feb 13 15:39:45.536688 kubelet[3333]: I0213 15:39:45.535216 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh7ld\" (UniqueName: \"kubernetes.io/projected/b239b778-29de-42ea-9197-936c009743d9-kube-api-access-gh7ld\") pod \"kube-proxy-gvf7b\" (UID: \"b239b778-29de-42ea-9197-936c009743d9\") " pod="kube-system/kube-proxy-gvf7b"
Feb 13 15:39:45.536688 kubelet[3333]: I0213 15:39:45.535244 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-bpf-maps\") pod \"cilium-xskkf\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") " pod="kube-system/cilium-xskkf"
Feb 13 15:39:45.536688 kubelet[3333]: I0213 15:39:45.535271 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-lib-modules\") pod \"cilium-xskkf\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") " pod="kube-system/cilium-xskkf"
Feb 13 15:39:45.536688 kubelet[3333]: I0213 15:39:45.535305 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b239b778-29de-42ea-9197-936c009743d9-xtables-lock\") pod \"kube-proxy-gvf7b\" (UID: \"b239b778-29de-42ea-9197-936c009743d9\") " pod="kube-system/kube-proxy-gvf7b"
Feb 13 15:39:45.536933 kubelet[3333]: I0213 15:39:45.535331 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b239b778-29de-42ea-9197-936c009743d9-lib-modules\") pod \"kube-proxy-gvf7b\" (UID: \"b239b778-29de-42ea-9197-936c009743d9\") " pod="kube-system/kube-proxy-gvf7b"
Feb 13 15:39:45.536933 kubelet[3333]: I0213 15:39:45.535363 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-hostproc\") pod \"cilium-xskkf\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") " pod="kube-system/cilium-xskkf"
Feb 13 15:39:45.536933 kubelet[3333]: I0213 15:39:45.535404 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-cilium-run\") pod \"cilium-xskkf\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") " pod="kube-system/cilium-xskkf"
Feb 13 15:39:45.536933 kubelet[3333]: I0213 15:39:45.535431 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-xtables-lock\") pod \"cilium-xskkf\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") " pod="kube-system/cilium-xskkf"
Feb 13 15:39:45.536933 kubelet[3333]: I0213 15:39:45.535459 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-host-proc-sys-net\") pod \"cilium-xskkf\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") " pod="kube-system/cilium-xskkf"
Feb 13 15:39:45.536933 kubelet[3333]: I0213 15:39:45.535491 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/338946dd-71ce-4140-b262-0a1d56f0effe-clustermesh-secrets\") pod \"cilium-xskkf\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") " pod="kube-system/cilium-xskkf"
Feb 13 15:39:45.695578 kubelet[3333]: I0213 15:39:45.692021 3333 topology_manager.go:215] "Topology Admit Handler" podUID="bba2f478-24b9-4098-acb6-17f09975c8dd" podNamespace="kube-system" podName="cilium-operator-5cc964979-x8fz2"
Feb 13 15:39:45.706005 systemd[1]: Created slice kubepods-besteffort-podbba2f478_24b9_4098_acb6_17f09975c8dd.slice - libcontainer container kubepods-besteffort-podbba2f478_24b9_4098_acb6_17f09975c8dd.slice.
Feb 13 15:39:45.737540 kubelet[3333]: I0213 15:39:45.737277 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss47n\" (UniqueName: \"kubernetes.io/projected/bba2f478-24b9-4098-acb6-17f09975c8dd-kube-api-access-ss47n\") pod \"cilium-operator-5cc964979-x8fz2\" (UID: \"bba2f478-24b9-4098-acb6-17f09975c8dd\") " pod="kube-system/cilium-operator-5cc964979-x8fz2"
Feb 13 15:39:45.737540 kubelet[3333]: I0213 15:39:45.737387 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bba2f478-24b9-4098-acb6-17f09975c8dd-cilium-config-path\") pod \"cilium-operator-5cc964979-x8fz2\" (UID: \"bba2f478-24b9-4098-acb6-17f09975c8dd\") " pod="kube-system/cilium-operator-5cc964979-x8fz2"
Feb 13 15:39:45.773083 containerd[1720]: time="2025-02-13T15:39:45.772974550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gvf7b,Uid:b239b778-29de-42ea-9197-936c009743d9,Namespace:kube-system,Attempt:0,}"
Feb 13 15:39:45.780682 containerd[1720]: time="2025-02-13T15:39:45.780156353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xskkf,Uid:338946dd-71ce-4140-b262-0a1d56f0effe,Namespace:kube-system,Attempt:0,}"
Feb 13 15:39:45.872833 containerd[1720]: time="2025-02-13T15:39:45.871202429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:39:45.872833 containerd[1720]: time="2025-02-13T15:39:45.872322860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:39:45.873681 containerd[1720]: time="2025-02-13T15:39:45.872341661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:39:45.873681 containerd[1720]: time="2025-02-13T15:39:45.873097782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:39:45.873941 containerd[1720]: time="2025-02-13T15:39:45.873646098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:39:45.873941 containerd[1720]: time="2025-02-13T15:39:45.873705099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:39:45.873941 containerd[1720]: time="2025-02-13T15:39:45.873721700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:39:45.873941 containerd[1720]: time="2025-02-13T15:39:45.873794102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:39:45.898800 systemd[1]: Started cri-containerd-3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1.scope - libcontainer container 3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1.
Feb 13 15:39:45.901577 systemd[1]: Started cri-containerd-7ef94b6bc6e36a2abec687bdcbc51f397cc9b5c7adca534cb02a74b11ba24700.scope - libcontainer container 7ef94b6bc6e36a2abec687bdcbc51f397cc9b5c7adca534cb02a74b11ba24700.
Feb 13 15:39:45.935523 containerd[1720]: time="2025-02-13T15:39:45.934940532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xskkf,Uid:338946dd-71ce-4140-b262-0a1d56f0effe,Namespace:kube-system,Attempt:0,} returns sandbox id \"3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1\""
Feb 13 15:39:45.938589 containerd[1720]: time="2025-02-13T15:39:45.938552334Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 15:39:45.941756 containerd[1720]: time="2025-02-13T15:39:45.941665022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gvf7b,Uid:b239b778-29de-42ea-9197-936c009743d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ef94b6bc6e36a2abec687bdcbc51f397cc9b5c7adca534cb02a74b11ba24700\""
Feb 13 15:39:45.945522 containerd[1720]: time="2025-02-13T15:39:45.945484930Z" level=info msg="CreateContainer within sandbox \"7ef94b6bc6e36a2abec687bdcbc51f397cc9b5c7adca534cb02a74b11ba24700\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:39:46.001874 containerd[1720]: time="2025-02-13T15:39:46.001709220Z" level=info msg="CreateContainer within sandbox \"7ef94b6bc6e36a2abec687bdcbc51f397cc9b5c7adca534cb02a74b11ba24700\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"011cd6a757f4ef5cd8e9ce4ed54811863e0d32a2d355020a30957ec4383b0bed\""
Feb 13 15:39:46.003190 containerd[1720]: time="2025-02-13T15:39:46.003154461Z" level=info msg="StartContainer for \"011cd6a757f4ef5cd8e9ce4ed54811863e0d32a2d355020a30957ec4383b0bed\""
Feb 13 15:39:46.013410 containerd[1720]: time="2025-02-13T15:39:46.013368550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-x8fz2,Uid:bba2f478-24b9-4098-acb6-17f09975c8dd,Namespace:kube-system,Attempt:0,}"
Feb 13 15:39:46.033807 systemd[1]: Started cri-containerd-011cd6a757f4ef5cd8e9ce4ed54811863e0d32a2d355020a30957ec4383b0bed.scope - libcontainer container 011cd6a757f4ef5cd8e9ce4ed54811863e0d32a2d355020a30957ec4383b0bed.
Feb 13 15:39:46.072311 containerd[1720]: time="2025-02-13T15:39:46.072258016Z" level=info msg="StartContainer for \"011cd6a757f4ef5cd8e9ce4ed54811863e0d32a2d355020a30957ec4383b0bed\" returns successfully"
Feb 13 15:39:46.094681 containerd[1720]: time="2025-02-13T15:39:46.092275482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:39:46.094681 containerd[1720]: time="2025-02-13T15:39:46.092951901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:39:46.094681 containerd[1720]: time="2025-02-13T15:39:46.092969602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:39:46.094681 containerd[1720]: time="2025-02-13T15:39:46.093067205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:39:46.117930 systemd[1]: Started cri-containerd-4fe81224f4383f9ba6810e286451fb0df7d21afecbd81d786ebba32e447ae795.scope - libcontainer container 4fe81224f4383f9ba6810e286451fb0df7d21afecbd81d786ebba32e447ae795.
Feb 13 15:39:46.143942 kubelet[3333]: I0213 15:39:46.143903 3333 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gvf7b" podStartSLOduration=1.143751338 podStartE2EDuration="1.143751338s" podCreationTimestamp="2025-02-13 15:39:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:46.143002817 +0000 UTC m=+14.208495153" watchObservedRunningTime="2025-02-13 15:39:46.143751338 +0000 UTC m=+14.209243674"
Feb 13 15:39:46.186838 containerd[1720]: time="2025-02-13T15:39:46.186775755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-x8fz2,Uid:bba2f478-24b9-4098-acb6-17f09975c8dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fe81224f4383f9ba6810e286451fb0df7d21afecbd81d786ebba32e447ae795\""
Feb 13 15:39:52.872877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1040410981.mount: Deactivated successfully.
Feb 13 15:39:55.057083 containerd[1720]: time="2025-02-13T15:39:55.057012156Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:55.062276 containerd[1720]: time="2025-02-13T15:39:55.062198890Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Feb 13 15:39:55.072117 containerd[1720]: time="2025-02-13T15:39:55.072046544Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:55.074403 containerd[1720]: time="2025-02-13T15:39:55.074364604Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.135621565s"
Feb 13 15:39:55.074403 containerd[1720]: time="2025-02-13T15:39:55.074405905Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 13 15:39:55.075963 containerd[1720]: time="2025-02-13T15:39:55.075893143Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 15:39:55.076992 containerd[1720]: time="2025-02-13T15:39:55.076853268Z" level=info msg="CreateContainer within sandbox \"3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:39:55.141422 containerd[1720]: time="2025-02-13T15:39:55.141368532Z" level=info msg="CreateContainer within sandbox \"3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58\""
Feb 13 15:39:55.142173 containerd[1720]: time="2025-02-13T15:39:55.142109751Z" level=info msg="StartContainer for \"4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58\""
Feb 13 15:39:55.177767 systemd[1]: Started cri-containerd-4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58.scope - libcontainer container 4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58.
Feb 13 15:39:55.205445 containerd[1720]: time="2025-02-13T15:39:55.205238480Z" level=info msg="StartContainer for \"4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58\" returns successfully"
Feb 13 15:39:55.215158 systemd[1]: cri-containerd-4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58.scope: Deactivated successfully.
Feb 13 15:39:56.126267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58-rootfs.mount: Deactivated successfully.
Feb 13 15:39:59.096498 containerd[1720]: time="2025-02-13T15:39:59.096396426Z" level=info msg="shim disconnected" id=4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58 namespace=k8s.io
Feb 13 15:39:59.096498 containerd[1720]: time="2025-02-13T15:39:59.096483528Z" level=warning msg="cleaning up after shim disconnected" id=4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58 namespace=k8s.io
Feb 13 15:39:59.096498 containerd[1720]: time="2025-02-13T15:39:59.096497929Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:39:59.177976 containerd[1720]: time="2025-02-13T15:39:59.176992868Z" level=info msg="CreateContainer within sandbox \"3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:39:59.228553 containerd[1720]: time="2025-02-13T15:39:59.228496772Z" level=info msg="CreateContainer within sandbox \"3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4\""
Feb 13 15:39:59.230052 containerd[1720]: time="2025-02-13T15:39:59.230016911Z" level=info msg="StartContainer for \"a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4\""
Feb 13 15:39:59.264782 systemd[1]: Started cri-containerd-a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4.scope - libcontainer container a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4.
Feb 13 15:39:59.297538 containerd[1720]: time="2025-02-13T15:39:59.297482019Z" level=info msg="StartContainer for \"a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4\" returns successfully"
Feb 13 15:39:59.307371 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:39:59.307761 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:39:59.307879 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:39:59.315930 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:39:59.316233 systemd[1]: cri-containerd-a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4.scope: Deactivated successfully.
Feb 13 15:39:59.337315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4-rootfs.mount: Deactivated successfully.
Feb 13 15:39:59.346596 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:39:59.354281 containerd[1720]: time="2025-02-13T15:39:59.354215256Z" level=info msg="shim disconnected" id=a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4 namespace=k8s.io
Feb 13 15:39:59.354281 containerd[1720]: time="2025-02-13T15:39:59.354277958Z" level=warning msg="cleaning up after shim disconnected" id=a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4 namespace=k8s.io
Feb 13 15:39:59.354483 containerd[1720]: time="2025-02-13T15:39:59.354289358Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:40:00.185145 containerd[1720]: time="2025-02-13T15:40:00.184962597Z" level=info msg="CreateContainer within sandbox \"3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:40:00.289131 containerd[1720]: time="2025-02-13T15:40:00.288879329Z" level=info msg="CreateContainer within sandbox \"3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388\""
Feb 13 15:40:00.290392 containerd[1720]: time="2025-02-13T15:40:00.289987658Z" level=info msg="StartContainer for \"35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388\""
Feb 13 15:40:00.345904 systemd[1]: Started cri-containerd-35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388.scope - libcontainer container 35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388.
Feb 13 15:40:00.394213 containerd[1720]: time="2025-02-13T15:40:00.393732085Z" level=info msg="StartContainer for \"35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388\" returns successfully"
Feb 13 15:40:00.395404 systemd[1]: cri-containerd-35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388.scope: Deactivated successfully.
Feb 13 15:40:00.795292 containerd[1720]: time="2025-02-13T15:40:00.795011849Z" level=info msg="shim disconnected" id=35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388 namespace=k8s.io
Feb 13 15:40:00.795292 containerd[1720]: time="2025-02-13T15:40:00.795288756Z" level=warning msg="cleaning up after shim disconnected" id=35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388 namespace=k8s.io
Feb 13 15:40:00.795292 containerd[1720]: time="2025-02-13T15:40:00.795304256Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:40:01.015349 containerd[1720]: time="2025-02-13T15:40:01.014466407Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Feb 13 15:40:01.015349 containerd[1720]: time="2025-02-13T15:40:01.014803416Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:01.015974 containerd[1720]: time="2025-02-13T15:40:01.015933044Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:01.017458 containerd[1720]: time="2025-02-13T15:40:01.017302179Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.941362735s"
Feb 13 15:40:01.017458 containerd[1720]: time="2025-02-13T15:40:01.017342080Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 13 15:40:01.020709 containerd[1720]: time="2025-02-13T15:40:01.020376557Z" level=info msg="CreateContainer within sandbox \"4fe81224f4383f9ba6810e286451fb0df7d21afecbd81d786ebba32e447ae795\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 15:40:01.056770 containerd[1720]: time="2025-02-13T15:40:01.056586474Z" level=info msg="CreateContainer within sandbox \"4fe81224f4383f9ba6810e286451fb0df7d21afecbd81d786ebba32e447ae795\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8\""
Feb 13 15:40:01.058760 containerd[1720]: time="2025-02-13T15:40:01.057713403Z" level=info msg="StartContainer for \"f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8\""
Feb 13 15:40:01.084806 systemd[1]: Started cri-containerd-f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8.scope - libcontainer container f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8.
Feb 13 15:40:01.111718 containerd[1720]: time="2025-02-13T15:40:01.111655869Z" level=info msg="StartContainer for \"f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8\" returns successfully"
Feb 13 15:40:01.192181 containerd[1720]: time="2025-02-13T15:40:01.191678896Z" level=info msg="CreateContainer within sandbox \"3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:40:01.221319 systemd[1]: run-containerd-runc-k8s.io-35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388-runc.KliYCh.mount: Deactivated successfully.
Feb 13 15:40:01.221447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388-rootfs.mount: Deactivated successfully.
Feb 13 15:40:01.252573 containerd[1720]: time="2025-02-13T15:40:01.252512136Z" level=info msg="CreateContainer within sandbox \"3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92\""
Feb 13 15:40:01.253347 containerd[1720]: time="2025-02-13T15:40:01.253312057Z" level=info msg="StartContainer for \"b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92\""
Feb 13 15:40:01.314234 systemd[1]: run-containerd-runc-k8s.io-b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92-runc.GBbljh.mount: Deactivated successfully.
Feb 13 15:40:01.328808 systemd[1]: Started cri-containerd-b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92.scope - libcontainer container b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92.
Feb 13 15:40:01.377034 systemd[1]: cri-containerd-b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92.scope: Deactivated successfully.
Feb 13 15:40:01.379299 containerd[1720]: time="2025-02-13T15:40:01.379043141Z" level=info msg="StartContainer for \"b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92\" returns successfully"
Feb 13 15:40:01.602440 containerd[1720]: time="2025-02-13T15:40:01.601762482Z" level=info msg="shim disconnected" id=b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92 namespace=k8s.io
Feb 13 15:40:01.603192 containerd[1720]: time="2025-02-13T15:40:01.602819409Z" level=warning msg="cleaning up after shim disconnected" id=b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92 namespace=k8s.io
Feb 13 15:40:01.603192 containerd[1720]: time="2025-02-13T15:40:01.602849810Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:40:01.629069 containerd[1720]: time="2025-02-13T15:40:01.628985472Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:40:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 15:40:02.207146 containerd[1720]: time="2025-02-13T15:40:02.207095214Z" level=info msg="CreateContainer within sandbox \"3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:40:02.217431 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92-rootfs.mount: Deactivated successfully.
Feb 13 15:40:02.230018 kubelet[3333]: I0213 15:40:02.229972 3333 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-x8fz2" podStartSLOduration=2.402022644 podStartE2EDuration="17.229917192s" podCreationTimestamp="2025-02-13 15:39:45 +0000 UTC" firstStartedPulling="2025-02-13 15:39:46.190059548 +0000 UTC m=+14.255551884" lastFinishedPulling="2025-02-13 15:40:01.017954096 +0000 UTC m=+29.083446432" observedRunningTime="2025-02-13 15:40:01.275120209 +0000 UTC m=+29.340612645" watchObservedRunningTime="2025-02-13 15:40:02.229917192 +0000 UTC m=+30.295409528"
Feb 13 15:40:02.261739 containerd[1720]: time="2025-02-13T15:40:02.261693997Z" level=info msg="CreateContainer within sandbox \"3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959\""
Feb 13 15:40:02.263053 containerd[1720]: time="2025-02-13T15:40:02.262281712Z" level=info msg="StartContainer for \"522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959\""
Feb 13 15:40:02.298781 systemd[1]: Started cri-containerd-522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959.scope - libcontainer container 522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959.
Feb 13 15:40:02.330074 containerd[1720]: time="2025-02-13T15:40:02.329988427Z" level=info msg="StartContainer for \"522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959\" returns successfully"
Feb 13 15:40:02.492893 kubelet[3333]: I0213 15:40:02.492770 3333 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 15:40:02.531318 kubelet[3333]: I0213 15:40:02.531274 3333 topology_manager.go:215] "Topology Admit Handler" podUID="a11bad78-1f79-4d6e-a49a-3cea00331889" podNamespace="kube-system" podName="coredns-76f75df574-9b9tl"
Feb 13 15:40:02.544775 systemd[1]: Created slice kubepods-burstable-poda11bad78_1f79_4d6e_a49a_3cea00331889.slice - libcontainer container kubepods-burstable-poda11bad78_1f79_4d6e_a49a_3cea00331889.slice.
Feb 13 15:40:02.549884 kubelet[3333]: I0213 15:40:02.549853 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g99f\" (UniqueName: \"kubernetes.io/projected/a11bad78-1f79-4d6e-a49a-3cea00331889-kube-api-access-8g99f\") pod \"coredns-76f75df574-9b9tl\" (UID: \"a11bad78-1f79-4d6e-a49a-3cea00331889\") " pod="kube-system/coredns-76f75df574-9b9tl"
Feb 13 15:40:02.550027 kubelet[3333]: I0213 15:40:02.549916 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a11bad78-1f79-4d6e-a49a-3cea00331889-config-volume\") pod \"coredns-76f75df574-9b9tl\" (UID: \"a11bad78-1f79-4d6e-a49a-3cea00331889\") " pod="kube-system/coredns-76f75df574-9b9tl"
Feb 13 15:40:02.552532 kubelet[3333]: I0213 15:40:02.552502 3333 topology_manager.go:215] "Topology Admit Handler" podUID="95ef3359-1989-46ef-82f2-6a1795a2ef0d" podNamespace="kube-system" podName="coredns-76f75df574-2dx27"
Feb 13 15:40:02.563930 systemd[1]: Created slice kubepods-burstable-pod95ef3359_1989_46ef_82f2_6a1795a2ef0d.slice - libcontainer container kubepods-burstable-pod95ef3359_1989_46ef_82f2_6a1795a2ef0d.slice.
Feb 13 15:40:02.651143 kubelet[3333]: I0213 15:40:02.651093 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95ef3359-1989-46ef-82f2-6a1795a2ef0d-config-volume\") pod \"coredns-76f75df574-2dx27\" (UID: \"95ef3359-1989-46ef-82f2-6a1795a2ef0d\") " pod="kube-system/coredns-76f75df574-2dx27"
Feb 13 15:40:02.652062 kubelet[3333]: I0213 15:40:02.651994 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkwvn\" (UniqueName: \"kubernetes.io/projected/95ef3359-1989-46ef-82f2-6a1795a2ef0d-kube-api-access-rkwvn\") pod \"coredns-76f75df574-2dx27\" (UID: \"95ef3359-1989-46ef-82f2-6a1795a2ef0d\") " pod="kube-system/coredns-76f75df574-2dx27"
Feb 13 15:40:02.854134 containerd[1720]: time="2025-02-13T15:40:02.853098176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9b9tl,Uid:a11bad78-1f79-4d6e-a49a-3cea00331889,Namespace:kube-system,Attempt:0,}"
Feb 13 15:40:02.869678 containerd[1720]: time="2025-02-13T15:40:02.869091781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2dx27,Uid:95ef3359-1989-46ef-82f2-6a1795a2ef0d,Namespace:kube-system,Attempt:0,}"
Feb 13 15:40:05.308146 systemd-networkd[1324]: cilium_host: Link UP
Feb 13 15:40:05.310726 systemd-networkd[1324]: cilium_net: Link UP
Feb 13 15:40:05.310733 systemd-networkd[1324]: cilium_net: Gained carrier
Feb 13 15:40:05.312087 systemd-networkd[1324]: cilium_host: Gained carrier
Feb 13 15:40:05.520790 systemd-networkd[1324]: cilium_net: Gained IPv6LL
Feb 13 15:40:05.566153 systemd-networkd[1324]: cilium_vxlan: Link UP
Feb 13 15:40:05.566165 systemd-networkd[1324]: cilium_vxlan: Gained carrier
Feb 13 15:40:05.697007 systemd-networkd[1324]: cilium_host: Gained IPv6LL
Feb 13 15:40:05.865716 kernel: NET: Registered PF_ALG protocol family
Feb 13 15:40:06.616010 systemd-networkd[1324]: lxc_health: Link UP Feb 13 15:40:06.635787 systemd-networkd[1324]: lxc_health: Gained carrier Feb 13 15:40:06.784762 systemd-networkd[1324]: cilium_vxlan: Gained IPv6LL Feb 13 15:40:06.948126 systemd-networkd[1324]: lxc96b013e256fe: Link UP Feb 13 15:40:06.954653 kernel: eth0: renamed from tmp83d71 Feb 13 15:40:06.961336 systemd-networkd[1324]: lxc96b013e256fe: Gained carrier Feb 13 15:40:06.995841 systemd-networkd[1324]: lxcb07eb61c5077: Link UP Feb 13 15:40:07.014131 kernel: eth0: renamed from tmp36f33 Feb 13 15:40:07.019518 systemd-networkd[1324]: lxcb07eb61c5077: Gained carrier Feb 13 15:40:07.811416 kubelet[3333]: I0213 15:40:07.811370 3333 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-xskkf" podStartSLOduration=13.673928297 podStartE2EDuration="22.811316011s" podCreationTimestamp="2025-02-13 15:39:45 +0000 UTC" firstStartedPulling="2025-02-13 15:39:45.937469303 +0000 UTC m=+14.002961639" lastFinishedPulling="2025-02-13 15:39:55.074856917 +0000 UTC m=+23.140349353" observedRunningTime="2025-02-13 15:40:03.236667491 +0000 UTC m=+31.302159827" watchObservedRunningTime="2025-02-13 15:40:07.811316011 +0000 UTC m=+35.876808347" Feb 13 15:40:08.000842 systemd-networkd[1324]: lxc96b013e256fe: Gained IPv6LL Feb 13 15:40:08.640785 systemd-networkd[1324]: lxc_health: Gained IPv6LL Feb 13 15:40:08.704908 systemd-networkd[1324]: lxcb07eb61c5077: Gained IPv6LL Feb 13 15:40:10.896886 containerd[1720]: time="2025-02-13T15:40:10.896529196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:40:10.896886 containerd[1720]: time="2025-02-13T15:40:10.896599697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:40:10.896886 containerd[1720]: time="2025-02-13T15:40:10.896651499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:10.896886 containerd[1720]: time="2025-02-13T15:40:10.896745001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:10.898768 containerd[1720]: time="2025-02-13T15:40:10.897750627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:40:10.898768 containerd[1720]: time="2025-02-13T15:40:10.897802228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:40:10.898768 containerd[1720]: time="2025-02-13T15:40:10.897816229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:10.900639 containerd[1720]: time="2025-02-13T15:40:10.899168863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:10.954812 systemd[1]: Started cri-containerd-36f3333335e5b08dd5f83f443cda7936e42bf29c69ea7dd5b9048f1f67661d57.scope - libcontainer container 36f3333335e5b08dd5f83f443cda7936e42bf29c69ea7dd5b9048f1f67661d57. Feb 13 15:40:10.957658 systemd[1]: Started cri-containerd-83d71e8c0d29c9223df8b721c55dc8c90aa69f2674ecf77506b1931d2040b973.scope - libcontainer container 83d71e8c0d29c9223df8b721c55dc8c90aa69f2674ecf77506b1931d2040b973. 
Feb 13 15:40:11.052371 containerd[1720]: time="2025-02-13T15:40:11.052320694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2dx27,Uid:95ef3359-1989-46ef-82f2-6a1795a2ef0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"36f3333335e5b08dd5f83f443cda7936e42bf29c69ea7dd5b9048f1f67661d57\"" Feb 13 15:40:11.058126 containerd[1720]: time="2025-02-13T15:40:11.057969239Z" level=info msg="CreateContainer within sandbox \"36f3333335e5b08dd5f83f443cda7936e42bf29c69ea7dd5b9048f1f67661d57\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:40:11.072269 containerd[1720]: time="2025-02-13T15:40:11.071830595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9b9tl,Uid:a11bad78-1f79-4d6e-a49a-3cea00331889,Namespace:kube-system,Attempt:0,} returns sandbox id \"83d71e8c0d29c9223df8b721c55dc8c90aa69f2674ecf77506b1931d2040b973\"" Feb 13 15:40:11.077437 containerd[1720]: time="2025-02-13T15:40:11.076576817Z" level=info msg="CreateContainer within sandbox \"83d71e8c0d29c9223df8b721c55dc8c90aa69f2674ecf77506b1931d2040b973\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:40:11.130361 containerd[1720]: time="2025-02-13T15:40:11.130304596Z" level=info msg="CreateContainer within sandbox \"83d71e8c0d29c9223df8b721c55dc8c90aa69f2674ecf77506b1931d2040b973\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b0a2c3ef41d1995e224d886a5b00857fe3375ad6eb73fc501a8d42954522bb7\"" Feb 13 15:40:11.131099 containerd[1720]: time="2025-02-13T15:40:11.131030214Z" level=info msg="StartContainer for \"0b0a2c3ef41d1995e224d886a5b00857fe3375ad6eb73fc501a8d42954522bb7\"" Feb 13 15:40:11.132823 containerd[1720]: time="2025-02-13T15:40:11.132553853Z" level=info msg="CreateContainer within sandbox \"36f3333335e5b08dd5f83f443cda7936e42bf29c69ea7dd5b9048f1f67661d57\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"4058bde526a3dda2f692471c765d6364d16879504c1634a6f22d8ae4587e5b8f\"" Feb 13 15:40:11.134658 containerd[1720]: time="2025-02-13T15:40:11.133324273Z" level=info msg="StartContainer for \"4058bde526a3dda2f692471c765d6364d16879504c1634a6f22d8ae4587e5b8f\"" Feb 13 15:40:11.167258 systemd[1]: Started cri-containerd-0b0a2c3ef41d1995e224d886a5b00857fe3375ad6eb73fc501a8d42954522bb7.scope - libcontainer container 0b0a2c3ef41d1995e224d886a5b00857fe3375ad6eb73fc501a8d42954522bb7. Feb 13 15:40:11.176844 systemd[1]: Started cri-containerd-4058bde526a3dda2f692471c765d6364d16879504c1634a6f22d8ae4587e5b8f.scope - libcontainer container 4058bde526a3dda2f692471c765d6364d16879504c1634a6f22d8ae4587e5b8f. Feb 13 15:40:11.216302 containerd[1720]: time="2025-02-13T15:40:11.216257002Z" level=info msg="StartContainer for \"0b0a2c3ef41d1995e224d886a5b00857fe3375ad6eb73fc501a8d42954522bb7\" returns successfully" Feb 13 15:40:11.224706 containerd[1720]: time="2025-02-13T15:40:11.224652417Z" level=info msg="StartContainer for \"4058bde526a3dda2f692471c765d6364d16879504c1634a6f22d8ae4587e5b8f\" returns successfully" Feb 13 15:40:11.258991 kubelet[3333]: I0213 15:40:11.258948 3333 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2dx27" podStartSLOduration=26.258895396 podStartE2EDuration="26.258895396s" podCreationTimestamp="2025-02-13 15:39:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:40:11.25786637 +0000 UTC m=+39.323358706" watchObservedRunningTime="2025-02-13 15:40:11.258895396 +0000 UTC m=+39.324387832" Feb 13 15:40:11.288155 kubelet[3333]: I0213 15:40:11.288108 3333 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9b9tl" podStartSLOduration=26.288052744 podStartE2EDuration="26.288052744s" podCreationTimestamp="2025-02-13 15:39:45 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:40:11.287173522 +0000 UTC m=+39.352665958" watchObservedRunningTime="2025-02-13 15:40:11.288052744 +0000 UTC m=+39.353545080" Feb 13 15:41:58.833935 systemd[1]: Started sshd@7-10.200.8.20:22-10.200.16.10:35242.service - OpenSSH per-connection server daemon (10.200.16.10:35242). Feb 13 15:41:59.477144 sshd[4711]: Accepted publickey for core from 10.200.16.10 port 35242 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:41:59.478722 sshd-session[4711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:59.482930 systemd-logind[1692]: New session 10 of user core. Feb 13 15:41:59.490785 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:42:00.064877 sshd[4713]: Connection closed by 10.200.16.10 port 35242 Feb 13 15:42:00.065679 sshd-session[4711]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:00.069758 systemd[1]: sshd@7-10.200.8.20:22-10.200.16.10:35242.service: Deactivated successfully. Feb 13 15:42:00.072038 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:42:00.072886 systemd-logind[1692]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:42:00.073930 systemd-logind[1692]: Removed session 10. Feb 13 15:42:05.182964 systemd[1]: Started sshd@8-10.200.8.20:22-10.200.16.10:42568.service - OpenSSH per-connection server daemon (10.200.16.10:42568). Feb 13 15:42:05.822045 sshd[4725]: Accepted publickey for core from 10.200.16.10 port 42568 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:42:05.823535 sshd-session[4725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:05.827869 systemd-logind[1692]: New session 11 of user core. Feb 13 15:42:05.831817 systemd[1]: Started session-11.scope - Session 11 of User core. 
Feb 13 15:42:06.334638 sshd[4727]: Connection closed by 10.200.16.10 port 42568 Feb 13 15:42:06.335465 sshd-session[4725]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:06.338897 systemd[1]: sshd@8-10.200.8.20:22-10.200.16.10:42568.service: Deactivated successfully. Feb 13 15:42:06.341782 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:42:06.343731 systemd-logind[1692]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:42:06.345134 systemd-logind[1692]: Removed session 11. Feb 13 15:42:11.458937 systemd[1]: Started sshd@9-10.200.8.20:22-10.200.16.10:42462.service - OpenSSH per-connection server daemon (10.200.16.10:42462). Feb 13 15:42:12.097054 sshd[4739]: Accepted publickey for core from 10.200.16.10 port 42462 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:42:12.098904 sshd-session[4739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:12.103093 systemd-logind[1692]: New session 12 of user core. Feb 13 15:42:12.110782 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:42:12.621929 sshd[4741]: Connection closed by 10.200.16.10 port 42462 Feb 13 15:42:12.622869 sshd-session[4739]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:12.627494 systemd[1]: sshd@9-10.200.8.20:22-10.200.16.10:42462.service: Deactivated successfully. Feb 13 15:42:12.630091 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:42:12.631116 systemd-logind[1692]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:42:12.632392 systemd-logind[1692]: Removed session 12. Feb 13 15:42:17.738948 systemd[1]: Started sshd@10-10.200.8.20:22-10.200.16.10:42470.service - OpenSSH per-connection server daemon (10.200.16.10:42470). 
Feb 13 15:42:18.381844 sshd[4757]: Accepted publickey for core from 10.200.16.10 port 42470 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:42:18.383318 sshd-session[4757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:18.387751 systemd-logind[1692]: New session 13 of user core. Feb 13 15:42:18.392778 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:42:18.901320 sshd[4759]: Connection closed by 10.200.16.10 port 42470 Feb 13 15:42:18.902193 sshd-session[4757]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:18.905661 systemd[1]: sshd@10-10.200.8.20:22-10.200.16.10:42470.service: Deactivated successfully. Feb 13 15:42:18.907909 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:42:18.909702 systemd-logind[1692]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:42:18.910860 systemd-logind[1692]: Removed session 13. Feb 13 15:42:19.023954 systemd[1]: Started sshd@11-10.200.8.20:22-10.200.16.10:40364.service - OpenSSH per-connection server daemon (10.200.16.10:40364). Feb 13 15:42:19.668975 sshd[4771]: Accepted publickey for core from 10.200.16.10 port 40364 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:42:19.670659 sshd-session[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:19.676284 systemd-logind[1692]: New session 14 of user core. Feb 13 15:42:19.680800 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:42:20.220794 sshd[4773]: Connection closed by 10.200.16.10 port 40364 Feb 13 15:42:20.221652 sshd-session[4771]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:20.225452 systemd[1]: sshd@11-10.200.8.20:22-10.200.16.10:40364.service: Deactivated successfully. Feb 13 15:42:20.228907 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:42:20.230971 systemd-logind[1692]: Session 14 logged out. 
Waiting for processes to exit. Feb 13 15:42:20.232369 systemd-logind[1692]: Removed session 14. Feb 13 15:42:20.343937 systemd[1]: Started sshd@12-10.200.8.20:22-10.200.16.10:40366.service - OpenSSH per-connection server daemon (10.200.16.10:40366). Feb 13 15:42:20.984174 sshd[4782]: Accepted publickey for core from 10.200.16.10 port 40366 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:42:20.986000 sshd-session[4782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:20.990814 systemd-logind[1692]: New session 15 of user core. Feb 13 15:42:20.995765 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:42:21.501243 sshd[4784]: Connection closed by 10.200.16.10 port 40366 Feb 13 15:42:21.502323 sshd-session[4782]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:21.507198 systemd[1]: sshd@12-10.200.8.20:22-10.200.16.10:40366.service: Deactivated successfully. Feb 13 15:42:21.509479 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:42:21.510352 systemd-logind[1692]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:42:21.511418 systemd-logind[1692]: Removed session 15. Feb 13 15:42:26.619983 systemd[1]: Started sshd@13-10.200.8.20:22-10.200.16.10:40372.service - OpenSSH per-connection server daemon (10.200.16.10:40372). Feb 13 15:42:27.266372 sshd[4795]: Accepted publickey for core from 10.200.16.10 port 40372 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:42:27.268029 sshd-session[4795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:27.272598 systemd-logind[1692]: New session 16 of user core. Feb 13 15:42:27.276790 systemd[1]: Started session-16.scope - Session 16 of User core. 
Feb 13 15:42:27.781491 sshd[4797]: Connection closed by 10.200.16.10 port 40372 Feb 13 15:42:27.782471 sshd-session[4795]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:27.787023 systemd[1]: sshd@13-10.200.8.20:22-10.200.16.10:40372.service: Deactivated successfully. Feb 13 15:42:27.789339 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:42:27.790232 systemd-logind[1692]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:42:27.791257 systemd-logind[1692]: Removed session 16. Feb 13 15:42:32.900935 systemd[1]: Started sshd@14-10.200.8.20:22-10.200.16.10:60184.service - OpenSSH per-connection server daemon (10.200.16.10:60184). Feb 13 15:42:33.540502 sshd[4810]: Accepted publickey for core from 10.200.16.10 port 60184 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:42:33.542353 sshd-session[4810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:33.547105 systemd-logind[1692]: New session 17 of user core. Feb 13 15:42:33.551953 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:42:34.053330 sshd[4812]: Connection closed by 10.200.16.10 port 60184 Feb 13 15:42:34.055467 sshd-session[4810]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:34.059585 systemd[1]: sshd@14-10.200.8.20:22-10.200.16.10:60184.service: Deactivated successfully. Feb 13 15:42:34.062203 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:42:34.064590 systemd-logind[1692]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:42:34.065924 systemd-logind[1692]: Removed session 17. Feb 13 15:42:34.175337 systemd[1]: Started sshd@15-10.200.8.20:22-10.200.16.10:60192.service - OpenSSH per-connection server daemon (10.200.16.10:60192). 
Feb 13 15:42:34.813223 sshd[4822]: Accepted publickey for core from 10.200.16.10 port 60192 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:42:34.815959 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:34.821641 systemd-logind[1692]: New session 18 of user core. Feb 13 15:42:34.826804 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:42:35.486080 sshd[4824]: Connection closed by 10.200.16.10 port 60192 Feb 13 15:42:35.487103 sshd-session[4822]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:35.491433 systemd[1]: sshd@15-10.200.8.20:22-10.200.16.10:60192.service: Deactivated successfully. Feb 13 15:42:35.493974 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:42:35.494737 systemd-logind[1692]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:42:35.495754 systemd-logind[1692]: Removed session 18. Feb 13 15:42:35.602906 systemd[1]: Started sshd@16-10.200.8.20:22-10.200.16.10:60194.service - OpenSSH per-connection server daemon (10.200.16.10:60194). Feb 13 15:42:36.241390 sshd[4833]: Accepted publickey for core from 10.200.16.10 port 60194 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:42:36.243159 sshd-session[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:36.248924 systemd-logind[1692]: New session 19 of user core. Feb 13 15:42:36.253784 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:42:38.121860 sshd[4835]: Connection closed by 10.200.16.10 port 60194 Feb 13 15:42:38.122881 sshd-session[4833]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:38.126296 systemd[1]: sshd@16-10.200.8.20:22-10.200.16.10:60194.service: Deactivated successfully. Feb 13 15:42:38.128857 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:42:38.130563 systemd-logind[1692]: Session 19 logged out. 
Waiting for processes to exit. Feb 13 15:42:38.131706 systemd-logind[1692]: Removed session 19. Feb 13 15:42:38.244498 systemd[1]: Started sshd@17-10.200.8.20:22-10.200.16.10:60202.service - OpenSSH per-connection server daemon (10.200.16.10:60202). Feb 13 15:42:38.888393 sshd[4851]: Accepted publickey for core from 10.200.16.10 port 60202 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:42:38.889957 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:38.894827 systemd-logind[1692]: New session 20 of user core. Feb 13 15:42:38.903797 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:42:39.508206 sshd[4853]: Connection closed by 10.200.16.10 port 60202 Feb 13 15:42:39.509033 sshd-session[4851]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:39.512988 systemd[1]: sshd@17-10.200.8.20:22-10.200.16.10:60202.service: Deactivated successfully. Feb 13 15:42:39.515209 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:42:39.516276 systemd-logind[1692]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:42:39.517428 systemd-logind[1692]: Removed session 20. Feb 13 15:42:39.631948 systemd[1]: Started sshd@18-10.200.8.20:22-10.200.16.10:33928.service - OpenSSH per-connection server daemon (10.200.16.10:33928). Feb 13 15:42:40.271136 sshd[4862]: Accepted publickey for core from 10.200.16.10 port 33928 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:42:40.273151 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:40.278890 systemd-logind[1692]: New session 21 of user core. Feb 13 15:42:40.282780 systemd[1]: Started session-21.scope - Session 21 of User core. 
Feb 13 15:42:40.786826 sshd[4864]: Connection closed by 10.200.16.10 port 33928 Feb 13 15:42:40.787763 sshd-session[4862]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:40.791107 systemd[1]: sshd@18-10.200.8.20:22-10.200.16.10:33928.service: Deactivated successfully. Feb 13 15:42:40.793494 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:42:40.795328 systemd-logind[1692]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:42:40.796424 systemd-logind[1692]: Removed session 21. Feb 13 15:42:45.909945 systemd[1]: Started sshd@19-10.200.8.20:22-10.200.16.10:33942.service - OpenSSH per-connection server daemon (10.200.16.10:33942). Feb 13 15:42:46.553961 sshd[4878]: Accepted publickey for core from 10.200.16.10 port 33942 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:42:46.555606 sshd-session[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:46.565927 systemd-logind[1692]: New session 22 of user core. Feb 13 15:42:46.570783 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:42:47.065277 sshd[4882]: Connection closed by 10.200.16.10 port 33942 Feb 13 15:42:47.066211 sshd-session[4878]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:47.069855 systemd[1]: sshd@19-10.200.8.20:22-10.200.16.10:33942.service: Deactivated successfully. Feb 13 15:42:47.072662 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:42:47.074349 systemd-logind[1692]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:42:47.075597 systemd-logind[1692]: Removed session 22. Feb 13 15:42:52.187958 systemd[1]: Started sshd@20-10.200.8.20:22-10.200.16.10:56232.service - OpenSSH per-connection server daemon (10.200.16.10:56232). 
Feb 13 15:42:52.833436 sshd[4893]: Accepted publickey for core from 10.200.16.10 port 56232 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:42:52.835000 sshd-session[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:52.839835 systemd-logind[1692]: New session 23 of user core. Feb 13 15:42:52.842787 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:42:53.350027 sshd[4895]: Connection closed by 10.200.16.10 port 56232 Feb 13 15:42:53.350943 sshd-session[4893]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:53.355528 systemd[1]: sshd@20-10.200.8.20:22-10.200.16.10:56232.service: Deactivated successfully. Feb 13 15:42:53.358308 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:42:53.359115 systemd-logind[1692]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:42:53.360151 systemd-logind[1692]: Removed session 23. Feb 13 15:42:58.471949 systemd[1]: Started sshd@21-10.200.8.20:22-10.200.16.10:56242.service - OpenSSH per-connection server daemon (10.200.16.10:56242). Feb 13 15:42:59.110534 sshd[4907]: Accepted publickey for core from 10.200.16.10 port 56242 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:42:59.112000 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:59.116764 systemd-logind[1692]: New session 24 of user core. Feb 13 15:42:59.119807 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:42:59.620300 sshd[4909]: Connection closed by 10.200.16.10 port 56242 Feb 13 15:42:59.621054 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:59.624915 systemd[1]: sshd@21-10.200.8.20:22-10.200.16.10:56242.service: Deactivated successfully. Feb 13 15:42:59.627158 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:42:59.628077 systemd-logind[1692]: Session 24 logged out. 
Waiting for processes to exit. Feb 13 15:42:59.629216 systemd-logind[1692]: Removed session 24. Feb 13 15:42:59.740917 systemd[1]: Started sshd@22-10.200.8.20:22-10.200.16.10:52958.service - OpenSSH per-connection server daemon (10.200.16.10:52958). Feb 13 15:43:00.381633 sshd[4920]: Accepted publickey for core from 10.200.16.10 port 52958 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ Feb 13 15:43:00.383726 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:00.388984 systemd-logind[1692]: New session 25 of user core. Feb 13 15:43:00.392784 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:43:02.070323 containerd[1720]: time="2025-02-13T15:43:02.068843455Z" level=info msg="StopContainer for \"f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8\" with timeout 30 (s)" Feb 13 15:43:02.071098 containerd[1720]: time="2025-02-13T15:43:02.070869305Z" level=info msg="Stop container \"f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8\" with signal terminated" Feb 13 15:43:02.087832 systemd[1]: run-containerd-runc-k8s.io-522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959-runc.HlasKd.mount: Deactivated successfully. Feb 13 15:43:02.091200 systemd[1]: cri-containerd-f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8.scope: Deactivated successfully. 
Feb 13 15:43:02.109179 containerd[1720]: time="2025-02-13T15:43:02.109121763Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:43:02.118151 containerd[1720]: time="2025-02-13T15:43:02.118107788Z" level=info msg="StopContainer for \"522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959\" with timeout 2 (s)" Feb 13 15:43:02.118849 containerd[1720]: time="2025-02-13T15:43:02.118401695Z" level=info msg="Stop container \"522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959\" with signal terminated" Feb 13 15:43:02.128117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8-rootfs.mount: Deactivated successfully. Feb 13 15:43:02.131225 systemd-networkd[1324]: lxc_health: Link DOWN Feb 13 15:43:02.131664 systemd-networkd[1324]: lxc_health: Lost carrier Feb 13 15:43:02.147202 systemd[1]: cri-containerd-522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959.scope: Deactivated successfully. Feb 13 15:43:02.147892 systemd[1]: cri-containerd-522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959.scope: Consumed 7.513s CPU time. Feb 13 15:43:02.171103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959-rootfs.mount: Deactivated successfully. 
Feb 13 15:43:02.182726 kubelet[3333]: E0213 15:43:02.182687 3333 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:43:02.617650 containerd[1720]: time="2025-02-13T15:43:02.617269483Z" level=info msg="shim disconnected" id=522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959 namespace=k8s.io Feb 13 15:43:02.617968 containerd[1720]: time="2025-02-13T15:43:02.617658992Z" level=warning msg="cleaning up after shim disconnected" id=522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959 namespace=k8s.io Feb 13 15:43:02.617968 containerd[1720]: time="2025-02-13T15:43:02.617679093Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:43:02.618098 containerd[1720]: time="2025-02-13T15:43:02.617435887Z" level=info msg="shim disconnected" id=f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8 namespace=k8s.io Feb 13 15:43:02.618098 containerd[1720]: time="2025-02-13T15:43:02.618004801Z" level=warning msg="cleaning up after shim disconnected" id=f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8 namespace=k8s.io Feb 13 15:43:02.618098 containerd[1720]: time="2025-02-13T15:43:02.618020201Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:43:02.647071 containerd[1720]: time="2025-02-13T15:43:02.647015927Z" level=info msg="StopContainer for \"f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8\" returns successfully" Feb 13 15:43:02.647946 containerd[1720]: time="2025-02-13T15:43:02.647765846Z" level=info msg="StopPodSandbox for \"4fe81224f4383f9ba6810e286451fb0df7d21afecbd81d786ebba32e447ae795\"" Feb 13 15:43:02.647946 containerd[1720]: time="2025-02-13T15:43:02.647816747Z" level=info msg="Container to stop \"f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" 
Feb 13 15:43:02.648110 containerd[1720]: time="2025-02-13T15:43:02.648048653Z" level=info msg="StopContainer for \"522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959\" returns successfully" Feb 13 15:43:02.650457 containerd[1720]: time="2025-02-13T15:43:02.650215607Z" level=info msg="StopPodSandbox for \"3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1\"" Feb 13 15:43:02.651329 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4fe81224f4383f9ba6810e286451fb0df7d21afecbd81d786ebba32e447ae795-shm.mount: Deactivated successfully. Feb 13 15:43:02.653086 containerd[1720]: time="2025-02-13T15:43:02.651213432Z" level=info msg="Container to stop \"522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:43:02.653086 containerd[1720]: time="2025-02-13T15:43:02.652031453Z" level=info msg="Container to stop \"35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:43:02.653086 containerd[1720]: time="2025-02-13T15:43:02.652048953Z" level=info msg="Container to stop \"b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:43:02.653086 containerd[1720]: time="2025-02-13T15:43:02.652240658Z" level=info msg="Container to stop \"4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:43:02.653086 containerd[1720]: time="2025-02-13T15:43:02.652259858Z" level=info msg="Container to stop \"a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:43:02.661445 systemd[1]: cri-containerd-4fe81224f4383f9ba6810e286451fb0df7d21afecbd81d786ebba32e447ae795.scope: Deactivated successfully. 
Feb 13 15:43:02.673028 systemd[1]: cri-containerd-3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1.scope: Deactivated successfully.
Feb 13 15:43:02.703745 containerd[1720]: time="2025-02-13T15:43:02.703038730Z" level=info msg="shim disconnected" id=4fe81224f4383f9ba6810e286451fb0df7d21afecbd81d786ebba32e447ae795 namespace=k8s.io
Feb 13 15:43:02.703745 containerd[1720]: time="2025-02-13T15:43:02.703114531Z" level=warning msg="cleaning up after shim disconnected" id=4fe81224f4383f9ba6810e286451fb0df7d21afecbd81d786ebba32e447ae795 namespace=k8s.io
Feb 13 15:43:02.703745 containerd[1720]: time="2025-02-13T15:43:02.703126032Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:43:02.704681 containerd[1720]: time="2025-02-13T15:43:02.704581268Z" level=info msg="shim disconnected" id=3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1 namespace=k8s.io
Feb 13 15:43:02.704841 containerd[1720]: time="2025-02-13T15:43:02.704819674Z" level=warning msg="cleaning up after shim disconnected" id=3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1 namespace=k8s.io
Feb 13 15:43:02.704938 containerd[1720]: time="2025-02-13T15:43:02.704922877Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:43:02.723253 containerd[1720]: time="2025-02-13T15:43:02.723201934Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:43:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 15:43:02.724212 containerd[1720]: time="2025-02-13T15:43:02.724059756Z" level=info msg="TearDown network for sandbox \"4fe81224f4383f9ba6810e286451fb0df7d21afecbd81d786ebba32e447ae795\" successfully"
Feb 13 15:43:02.724212 containerd[1720]: time="2025-02-13T15:43:02.724088256Z" level=info msg="StopPodSandbox for \"4fe81224f4383f9ba6810e286451fb0df7d21afecbd81d786ebba32e447ae795\" returns successfully"
Feb 13 15:43:02.724453 containerd[1720]: time="2025-02-13T15:43:02.724414865Z" level=info msg="TearDown network for sandbox \"3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1\" successfully"
Feb 13 15:43:02.724453 containerd[1720]: time="2025-02-13T15:43:02.724439865Z" level=info msg="StopPodSandbox for \"3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1\" returns successfully"
Feb 13 15:43:02.853326 kubelet[3333]: I0213 15:43:02.851880 3333 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-cilium-run\") pod \"338946dd-71ce-4140-b262-0a1d56f0effe\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") "
Feb 13 15:43:02.853326 kubelet[3333]: I0213 15:43:02.851952 3333 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bba2f478-24b9-4098-acb6-17f09975c8dd-cilium-config-path\") pod \"bba2f478-24b9-4098-acb6-17f09975c8dd\" (UID: \"bba2f478-24b9-4098-acb6-17f09975c8dd\") "
Feb 13 15:43:02.853326 kubelet[3333]: I0213 15:43:02.851987 3333 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-xtables-lock\") pod \"338946dd-71ce-4140-b262-0a1d56f0effe\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") "
Feb 13 15:43:02.853326 kubelet[3333]: I0213 15:43:02.852020 3333 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-cni-path\") pod \"338946dd-71ce-4140-b262-0a1d56f0effe\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") "
Feb 13 15:43:02.853326 kubelet[3333]: I0213 15:43:02.852054 3333 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-cilium-cgroup\") pod \"338946dd-71ce-4140-b262-0a1d56f0effe\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") "
Feb 13 15:43:02.853326 kubelet[3333]: I0213 15:43:02.852094 3333 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/338946dd-71ce-4140-b262-0a1d56f0effe-hubble-tls\") pod \"338946dd-71ce-4140-b262-0a1d56f0effe\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") "
Feb 13 15:43:02.853960 kubelet[3333]: I0213 15:43:02.852131 3333 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-bpf-maps\") pod \"338946dd-71ce-4140-b262-0a1d56f0effe\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") "
Feb 13 15:43:02.853960 kubelet[3333]: I0213 15:43:02.852172 3333 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss47n\" (UniqueName: \"kubernetes.io/projected/bba2f478-24b9-4098-acb6-17f09975c8dd-kube-api-access-ss47n\") pod \"bba2f478-24b9-4098-acb6-17f09975c8dd\" (UID: \"bba2f478-24b9-4098-acb6-17f09975c8dd\") "
Feb 13 15:43:02.853960 kubelet[3333]: I0213 15:43:02.852208 3333 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/338946dd-71ce-4140-b262-0a1d56f0effe-clustermesh-secrets\") pod \"338946dd-71ce-4140-b262-0a1d56f0effe\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") "
Feb 13 15:43:02.853960 kubelet[3333]: I0213 15:43:02.852243 3333 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/338946dd-71ce-4140-b262-0a1d56f0effe-cilium-config-path\") pod \"338946dd-71ce-4140-b262-0a1d56f0effe\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") "
Feb 13 15:43:02.853960 kubelet[3333]: I0213 15:43:02.852278 3333 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6bvx\" (UniqueName: \"kubernetes.io/projected/338946dd-71ce-4140-b262-0a1d56f0effe-kube-api-access-f6bvx\") pod \"338946dd-71ce-4140-b262-0a1d56f0effe\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") "
Feb 13 15:43:02.853960 kubelet[3333]: I0213 15:43:02.852306 3333 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-etc-cni-netd\") pod \"338946dd-71ce-4140-b262-0a1d56f0effe\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") "
Feb 13 15:43:02.854317 kubelet[3333]: I0213 15:43:02.852339 3333 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-hostproc\") pod \"338946dd-71ce-4140-b262-0a1d56f0effe\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") "
Feb 13 15:43:02.854317 kubelet[3333]: I0213 15:43:02.852368 3333 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-host-proc-sys-net\") pod \"338946dd-71ce-4140-b262-0a1d56f0effe\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") "
Feb 13 15:43:02.854317 kubelet[3333]: I0213 15:43:02.852397 3333 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-host-proc-sys-kernel\") pod \"338946dd-71ce-4140-b262-0a1d56f0effe\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") "
Feb 13 15:43:02.854317 kubelet[3333]: I0213 15:43:02.852414 3333 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "338946dd-71ce-4140-b262-0a1d56f0effe" (UID: "338946dd-71ce-4140-b262-0a1d56f0effe"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:02.854317 kubelet[3333]: I0213 15:43:02.852427 3333 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-lib-modules\") pod \"338946dd-71ce-4140-b262-0a1d56f0effe\" (UID: \"338946dd-71ce-4140-b262-0a1d56f0effe\") "
Feb 13 15:43:02.854317 kubelet[3333]: I0213 15:43:02.852500 3333 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-bpf-maps\") on node \"ci-4152.2.1-a-02a9d39241\" DevicePath \"\""
Feb 13 15:43:02.854650 kubelet[3333]: I0213 15:43:02.852528 3333 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "338946dd-71ce-4140-b262-0a1d56f0effe" (UID: "338946dd-71ce-4140-b262-0a1d56f0effe"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:02.856455 kubelet[3333]: I0213 15:43:02.856179 3333 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bba2f478-24b9-4098-acb6-17f09975c8dd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bba2f478-24b9-4098-acb6-17f09975c8dd" (UID: "bba2f478-24b9-4098-acb6-17f09975c8dd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:43:02.856455 kubelet[3333]: I0213 15:43:02.856261 3333 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "338946dd-71ce-4140-b262-0a1d56f0effe" (UID: "338946dd-71ce-4140-b262-0a1d56f0effe"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:02.856455 kubelet[3333]: I0213 15:43:02.856296 3333 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-cni-path" (OuterVolumeSpecName: "cni-path") pod "338946dd-71ce-4140-b262-0a1d56f0effe" (UID: "338946dd-71ce-4140-b262-0a1d56f0effe"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:02.856455 kubelet[3333]: I0213 15:43:02.856323 3333 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "338946dd-71ce-4140-b262-0a1d56f0effe" (UID: "338946dd-71ce-4140-b262-0a1d56f0effe"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:02.858544 kubelet[3333]: I0213 15:43:02.858013 3333 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "338946dd-71ce-4140-b262-0a1d56f0effe" (UID: "338946dd-71ce-4140-b262-0a1d56f0effe"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:02.859041 kubelet[3333]: I0213 15:43:02.858920 3333 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "338946dd-71ce-4140-b262-0a1d56f0effe" (UID: "338946dd-71ce-4140-b262-0a1d56f0effe"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:02.859373 kubelet[3333]: I0213 15:43:02.859238 3333 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "338946dd-71ce-4140-b262-0a1d56f0effe" (UID: "338946dd-71ce-4140-b262-0a1d56f0effe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:02.862062 kubelet[3333]: I0213 15:43:02.861053 3333 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "338946dd-71ce-4140-b262-0a1d56f0effe" (UID: "338946dd-71ce-4140-b262-0a1d56f0effe"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:02.862062 kubelet[3333]: I0213 15:43:02.861094 3333 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-hostproc" (OuterVolumeSpecName: "hostproc") pod "338946dd-71ce-4140-b262-0a1d56f0effe" (UID: "338946dd-71ce-4140-b262-0a1d56f0effe"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:43:02.863257 kubelet[3333]: I0213 15:43:02.863231 3333 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/338946dd-71ce-4140-b262-0a1d56f0effe-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "338946dd-71ce-4140-b262-0a1d56f0effe" (UID: "338946dd-71ce-4140-b262-0a1d56f0effe"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:43:02.864700 kubelet[3333]: I0213 15:43:02.864672 3333 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/338946dd-71ce-4140-b262-0a1d56f0effe-kube-api-access-f6bvx" (OuterVolumeSpecName: "kube-api-access-f6bvx") pod "338946dd-71ce-4140-b262-0a1d56f0effe" (UID: "338946dd-71ce-4140-b262-0a1d56f0effe"). InnerVolumeSpecName "kube-api-access-f6bvx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:43:02.864977 kubelet[3333]: I0213 15:43:02.864956 3333 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bba2f478-24b9-4098-acb6-17f09975c8dd-kube-api-access-ss47n" (OuterVolumeSpecName: "kube-api-access-ss47n") pod "bba2f478-24b9-4098-acb6-17f09975c8dd" (UID: "bba2f478-24b9-4098-acb6-17f09975c8dd"). InnerVolumeSpecName "kube-api-access-ss47n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:43:02.865945 kubelet[3333]: I0213 15:43:02.865899 3333 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/338946dd-71ce-4140-b262-0a1d56f0effe-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "338946dd-71ce-4140-b262-0a1d56f0effe" (UID: "338946dd-71ce-4140-b262-0a1d56f0effe"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 15:43:02.866113 kubelet[3333]: I0213 15:43:02.866082 3333 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/338946dd-71ce-4140-b262-0a1d56f0effe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "338946dd-71ce-4140-b262-0a1d56f0effe" (UID: "338946dd-71ce-4140-b262-0a1d56f0effe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:43:02.953070 kubelet[3333]: I0213 15:43:02.952899 3333 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-cilium-cgroup\") on node \"ci-4152.2.1-a-02a9d39241\" DevicePath \"\""
Feb 13 15:43:02.953070 kubelet[3333]: I0213 15:43:02.952952 3333 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/338946dd-71ce-4140-b262-0a1d56f0effe-hubble-tls\") on node \"ci-4152.2.1-a-02a9d39241\" DevicePath \"\""
Feb 13 15:43:02.953070 kubelet[3333]: I0213 15:43:02.952972 3333 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/338946dd-71ce-4140-b262-0a1d56f0effe-cilium-config-path\") on node \"ci-4152.2.1-a-02a9d39241\" DevicePath \"\""
Feb 13 15:43:02.953070 kubelet[3333]: I0213 15:43:02.952994 3333 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ss47n\" (UniqueName: \"kubernetes.io/projected/bba2f478-24b9-4098-acb6-17f09975c8dd-kube-api-access-ss47n\") on node \"ci-4152.2.1-a-02a9d39241\" DevicePath \"\""
Feb 13 15:43:02.953070 kubelet[3333]: I0213 15:43:02.953011 3333 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/338946dd-71ce-4140-b262-0a1d56f0effe-clustermesh-secrets\") on node \"ci-4152.2.1-a-02a9d39241\" DevicePath \"\""
Feb 13 15:43:02.953070 kubelet[3333]: I0213 15:43:02.953028 3333 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-f6bvx\" (UniqueName: \"kubernetes.io/projected/338946dd-71ce-4140-b262-0a1d56f0effe-kube-api-access-f6bvx\") on node \"ci-4152.2.1-a-02a9d39241\" DevicePath \"\""
Feb 13 15:43:02.953070 kubelet[3333]: I0213 15:43:02.953044 3333 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-etc-cni-netd\") on node \"ci-4152.2.1-a-02a9d39241\" DevicePath \"\""
Feb 13 15:43:02.953070 kubelet[3333]: I0213 15:43:02.953060 3333 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-lib-modules\") on node \"ci-4152.2.1-a-02a9d39241\" DevicePath \"\""
Feb 13 15:43:02.953718 kubelet[3333]: I0213 15:43:02.953077 3333 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-hostproc\") on node \"ci-4152.2.1-a-02a9d39241\" DevicePath \"\""
Feb 13 15:43:02.953718 kubelet[3333]: I0213 15:43:02.953098 3333 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-host-proc-sys-net\") on node \"ci-4152.2.1-a-02a9d39241\" DevicePath \"\""
Feb 13 15:43:02.953718 kubelet[3333]: I0213 15:43:02.953113 3333 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-host-proc-sys-kernel\") on node \"ci-4152.2.1-a-02a9d39241\" DevicePath \"\""
Feb 13 15:43:02.953718 kubelet[3333]: I0213 15:43:02.953131 3333 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-cilium-run\") on node \"ci-4152.2.1-a-02a9d39241\" DevicePath \"\""
Feb 13 15:43:02.953718 kubelet[3333]: I0213 15:43:02.953149 3333 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bba2f478-24b9-4098-acb6-17f09975c8dd-cilium-config-path\") on node \"ci-4152.2.1-a-02a9d39241\" DevicePath \"\""
Feb 13 15:43:02.953718 kubelet[3333]: I0213 15:43:02.953185 3333 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-xtables-lock\") on node \"ci-4152.2.1-a-02a9d39241\" DevicePath \"\""
Feb 13 15:43:02.953718 kubelet[3333]: I0213 15:43:02.953202 3333 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/338946dd-71ce-4140-b262-0a1d56f0effe-cni-path\") on node \"ci-4152.2.1-a-02a9d39241\" DevicePath \"\""
Feb 13 15:43:03.076508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4fe81224f4383f9ba6810e286451fb0df7d21afecbd81d786ebba32e447ae795-rootfs.mount: Deactivated successfully.
Feb 13 15:43:03.076668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1-rootfs.mount: Deactivated successfully.
Feb 13 15:43:03.076772 systemd[1]: var-lib-kubelet-pods-bba2f478\x2d24b9\x2d4098\x2dacb6\x2d17f09975c8dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dss47n.mount: Deactivated successfully.
Feb 13 15:43:03.076885 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3269bfa30e6cb4fbeb491d96ffeaf6247b7d970b35509541274bf97a12e6eaa1-shm.mount: Deactivated successfully.
Feb 13 15:43:03.076994 systemd[1]: var-lib-kubelet-pods-338946dd\x2d71ce\x2d4140\x2db262\x2d0a1d56f0effe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df6bvx.mount: Deactivated successfully.
Feb 13 15:43:03.077099 systemd[1]: var-lib-kubelet-pods-338946dd\x2d71ce\x2d4140\x2db262\x2d0a1d56f0effe-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 15:43:03.077207 systemd[1]: var-lib-kubelet-pods-338946dd\x2d71ce\x2d4140\x2db262\x2d0a1d56f0effe-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 15:43:03.624286 kubelet[3333]: I0213 15:43:03.622243 3333 scope.go:117] "RemoveContainer" containerID="f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8"
Feb 13 15:43:03.626159 containerd[1720]: time="2025-02-13T15:43:03.626112549Z" level=info msg="RemoveContainer for \"f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8\""
Feb 13 15:43:03.632666 systemd[1]: Removed slice kubepods-besteffort-podbba2f478_24b9_4098_acb6_17f09975c8dd.slice - libcontainer container kubepods-besteffort-podbba2f478_24b9_4098_acb6_17f09975c8dd.slice.
Feb 13 15:43:03.636873 systemd[1]: Removed slice kubepods-burstable-pod338946dd_71ce_4140_b262_0a1d56f0effe.slice - libcontainer container kubepods-burstable-pod338946dd_71ce_4140_b262_0a1d56f0effe.slice.
Feb 13 15:43:03.637184 systemd[1]: kubepods-burstable-pod338946dd_71ce_4140_b262_0a1d56f0effe.slice: Consumed 7.604s CPU time.
Feb 13 15:43:03.642303 containerd[1720]: time="2025-02-13T15:43:03.642264158Z" level=info msg="RemoveContainer for \"f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8\" returns successfully"
Feb 13 15:43:03.642579 kubelet[3333]: I0213 15:43:03.642555 3333 scope.go:117] "RemoveContainer" containerID="f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8"
Feb 13 15:43:03.642859 containerd[1720]: time="2025-02-13T15:43:03.642814372Z" level=error msg="ContainerStatus for \"f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8\": not found"
Feb 13 15:43:03.643023 kubelet[3333]: E0213 15:43:03.642996 3333 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8\": not found" containerID="f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8"
Feb 13 15:43:03.643129 kubelet[3333]: I0213 15:43:03.643119 3333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8"} err="failed to get container status \"f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8d3a67005b9b5160c9f5e412e7854eceaf66fea72678f6478e9d2af418285a8\": not found"
Feb 13 15:43:03.643178 kubelet[3333]: I0213 15:43:03.643137 3333 scope.go:117] "RemoveContainer" containerID="522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959"
Feb 13 15:43:03.644316 containerd[1720]: time="2025-02-13T15:43:03.644281209Z" level=info msg="RemoveContainer for \"522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959\""
Feb 13 15:43:03.654204 containerd[1720]: time="2025-02-13T15:43:03.654085058Z" level=info msg="RemoveContainer for \"522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959\" returns successfully"
Feb 13 15:43:03.654447 kubelet[3333]: I0213 15:43:03.654424 3333 scope.go:117] "RemoveContainer" containerID="b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92"
Feb 13 15:43:03.655653 containerd[1720]: time="2025-02-13T15:43:03.655549295Z" level=info msg="RemoveContainer for \"b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92\""
Feb 13 15:43:03.665389 containerd[1720]: time="2025-02-13T15:43:03.665352744Z" level=info msg="RemoveContainer for \"b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92\" returns successfully"
Feb 13 15:43:03.665630 kubelet[3333]: I0213 15:43:03.665565 3333 scope.go:117] "RemoveContainer" containerID="35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388"
Feb 13 15:43:03.666672 containerd[1720]: time="2025-02-13T15:43:03.666642377Z" level=info msg="RemoveContainer for \"35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388\""
Feb 13 15:43:03.676703 containerd[1720]: time="2025-02-13T15:43:03.676673931Z" level=info msg="RemoveContainer for \"35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388\" returns successfully"
Feb 13 15:43:03.676893 kubelet[3333]: I0213 15:43:03.676859 3333 scope.go:117] "RemoveContainer" containerID="a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4"
Feb 13 15:43:03.677950 containerd[1720]: time="2025-02-13T15:43:03.677924263Z" level=info msg="RemoveContainer for \"a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4\""
Feb 13 15:43:03.691049 containerd[1720]: time="2025-02-13T15:43:03.691014695Z" level=info msg="RemoveContainer for \"a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4\" returns successfully"
Feb 13 15:43:03.691298 kubelet[3333]: I0213 15:43:03.691214 3333 scope.go:117] "RemoveContainer" containerID="4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58"
Feb 13 15:43:03.692322 containerd[1720]: time="2025-02-13T15:43:03.692289127Z" level=info msg="RemoveContainer for \"4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58\""
Feb 13 15:43:03.705271 containerd[1720]: time="2025-02-13T15:43:03.705236455Z" level=info msg="RemoveContainer for \"4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58\" returns successfully"
Feb 13 15:43:03.705483 kubelet[3333]: I0213 15:43:03.705420 3333 scope.go:117] "RemoveContainer" containerID="522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959"
Feb 13 15:43:03.705715 containerd[1720]: time="2025-02-13T15:43:03.705677766Z" level=error msg="ContainerStatus for \"522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959\": not found"
Feb 13 15:43:03.705886 kubelet[3333]: E0213 15:43:03.705862 3333 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959\": not found" containerID="522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959"
Feb 13 15:43:03.705963 kubelet[3333]: I0213 15:43:03.705911 3333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959"} err="failed to get container status \"522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959\": rpc error: code = NotFound desc = an error occurred when try to find container \"522bdcd8da5154e656182d3e8c2b46db05d2139d6a0b06a863b9d5b0b44f6959\": not found"
Feb 13 15:43:03.705963 kubelet[3333]: I0213 15:43:03.705927 3333 scope.go:117] "RemoveContainer" containerID="b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92"
Feb 13 15:43:03.706194 containerd[1720]: time="2025-02-13T15:43:03.706127678Z" level=error msg="ContainerStatus for \"b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92\": not found"
Feb 13 15:43:03.706349 kubelet[3333]: E0213 15:43:03.706328 3333 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92\": not found" containerID="b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92"
Feb 13 15:43:03.706439 kubelet[3333]: I0213 15:43:03.706368 3333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92"} err="failed to get container status \"b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92\": rpc error: code = NotFound desc = an error occurred when try to find container \"b13a873aa16bf6383371daa4ef953e2db22ffae4070bdacb0320d6d47c39ba92\": not found"
Feb 13 15:43:03.706439 kubelet[3333]: I0213 15:43:03.706388 3333 scope.go:117] "RemoveContainer" containerID="35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388"
Feb 13 15:43:03.706647 containerd[1720]: time="2025-02-13T15:43:03.706565489Z" level=error msg="ContainerStatus for \"35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388\": not found"
Feb 13 15:43:03.706768 kubelet[3333]: E0213 15:43:03.706726 3333 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388\": not found" containerID="35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388"
Feb 13 15:43:03.706826 kubelet[3333]: I0213 15:43:03.706783 3333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388"} err="failed to get container status \"35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388\": rpc error: code = NotFound desc = an error occurred when try to find container \"35ad2fcb48ddee1ee3769f3d4c3d950b7f46a912413211ef4d6a48b8bd91e388\": not found"
Feb 13 15:43:03.706826 kubelet[3333]: I0213 15:43:03.706797 3333 scope.go:117] "RemoveContainer" containerID="a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4"
Feb 13 15:43:03.707114 containerd[1720]: time="2025-02-13T15:43:03.707075602Z" level=error msg="ContainerStatus for \"a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4\": not found"
Feb 13 15:43:03.707252 kubelet[3333]: E0213 15:43:03.707226 3333 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4\": not found" containerID="a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4"
Feb 13 15:43:03.707328 kubelet[3333]: I0213 15:43:03.707270 3333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4"} err="failed to get container status \"a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4\": rpc error: code = NotFound desc = an error occurred when try to find container \"a40dd445b1767941ae42f72a5a1adad97fcaaadd5739feb0e2ca2c19c51c4db4\": not found"
Feb 13 15:43:03.707328 kubelet[3333]: I0213 15:43:03.707288 3333 scope.go:117] "RemoveContainer" containerID="4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58"
Feb 13 15:43:03.707519 containerd[1720]: time="2025-02-13T15:43:03.707471712Z" level=error msg="ContainerStatus for \"4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58\": not found"
Feb 13 15:43:03.707660 kubelet[3333]: E0213 15:43:03.707638 3333 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58\": not found" containerID="4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58"
Feb 13 15:43:03.707726 kubelet[3333]: I0213 15:43:03.707677 3333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58"} err="failed to get container status \"4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a0008fc7ea8b1315d80b69dcb225b5ae014227e840f31854829003ad0919b58\": not found"
Feb 13 15:43:04.063799 kubelet[3333]: I0213 15:43:04.063678 3333 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="338946dd-71ce-4140-b262-0a1d56f0effe" path="/var/lib/kubelet/pods/338946dd-71ce-4140-b262-0a1d56f0effe/volumes"
Feb 13 15:43:04.064641 kubelet[3333]: I0213 15:43:04.064477 3333 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bba2f478-24b9-4098-acb6-17f09975c8dd" path="/var/lib/kubelet/pods/bba2f478-24b9-4098-acb6-17f09975c8dd/volumes"
Feb 13 15:43:04.073039 sshd[4922]: Connection closed by 10.200.16.10 port 52958
Feb 13 15:43:04.073757 sshd-session[4920]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:04.076833 systemd[1]: sshd@22-10.200.8.20:22-10.200.16.10:52958.service: Deactivated successfully.
Feb 13 15:43:04.079111 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 15:43:04.080875 systemd-logind[1692]: Session 25 logged out. Waiting for processes to exit.
Feb 13 15:43:04.082094 systemd-logind[1692]: Removed session 25.
Feb 13 15:43:04.189922 systemd[1]: Started sshd@23-10.200.8.20:22-10.200.16.10:52966.service - OpenSSH per-connection server daemon (10.200.16.10:52966).
Feb 13 15:43:04.827736 sshd[5088]: Accepted publickey for core from 10.200.16.10 port 52966 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ
Feb 13 15:43:04.829215 sshd-session[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:04.833427 systemd-logind[1692]: New session 26 of user core.
Feb 13 15:43:04.842776 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 15:43:05.636884 kubelet[3333]: I0213 15:43:05.636436 3333 topology_manager.go:215] "Topology Admit Handler" podUID="57842b3e-f393-46fb-96b9-adccbf1ab5f6" podNamespace="kube-system" podName="cilium-vrcb9"
Feb 13 15:43:05.639478 kubelet[3333]: E0213 15:43:05.639095 3333 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="338946dd-71ce-4140-b262-0a1d56f0effe" containerName="clean-cilium-state"
Feb 13 15:43:05.639478 kubelet[3333]: E0213 15:43:05.639124 3333 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="338946dd-71ce-4140-b262-0a1d56f0effe" containerName="mount-cgroup"
Feb 13 15:43:05.639478 kubelet[3333]: E0213 15:43:05.639135 3333 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="338946dd-71ce-4140-b262-0a1d56f0effe" containerName="apply-sysctl-overwrites"
Feb 13 15:43:05.639478 kubelet[3333]: E0213 15:43:05.639146 3333 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="338946dd-71ce-4140-b262-0a1d56f0effe" containerName="mount-bpf-fs"
Feb 13 15:43:05.639478 kubelet[3333]: E0213 15:43:05.639156 3333 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bba2f478-24b9-4098-acb6-17f09975c8dd" containerName="cilium-operator"
Feb 13 15:43:05.639478 kubelet[3333]: E0213 15:43:05.639166 3333 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="338946dd-71ce-4140-b262-0a1d56f0effe" containerName="cilium-agent"
Feb 13 15:43:05.639478 kubelet[3333]: I0213 15:43:05.639204 3333 memory_manager.go:354] "RemoveStaleState removing state" podUID="338946dd-71ce-4140-b262-0a1d56f0effe" containerName="cilium-agent"
Feb 13 15:43:05.639478 kubelet[3333]: I0213 15:43:05.639215 3333 memory_manager.go:354] "RemoveStaleState removing state" podUID="bba2f478-24b9-4098-acb6-17f09975c8dd" containerName="cilium-operator"
Feb 13 15:43:05.647365 kubelet[3333]: W0213 15:43:05.646375 3333 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4152.2.1-a-02a9d39241" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.1-a-02a9d39241' and this object
Feb 13 15:43:05.647365 kubelet[3333]: E0213 15:43:05.646421 3333 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4152.2.1-a-02a9d39241" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.1-a-02a9d39241' and this object
Feb 13 15:43:05.647365 kubelet[3333]: W0213 15:43:05.646491 3333 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4152.2.1-a-02a9d39241" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.1-a-02a9d39241' and this object
Feb 13 15:43:05.647365 kubelet[3333]: E0213 15:43:05.646510 3333 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4152.2.1-a-02a9d39241" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.1-a-02a9d39241' and this object
Feb 13 15:43:05.647365 kubelet[3333]: W0213 15:43:05.646564 3333 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4152.2.1-a-02a9d39241" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.1-a-02a9d39241' and this object
Feb 13 15:43:05.648047 kubelet[3333]: E0213 15:43:05.646576 3333 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4152.2.1-a-02a9d39241" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.1-a-02a9d39241' and this object
Feb 13 15:43:05.653240 systemd[1]: Created slice kubepods-burstable-pod57842b3e_f393_46fb_96b9_adccbf1ab5f6.slice - libcontainer container kubepods-burstable-pod57842b3e_f393_46fb_96b9_adccbf1ab5f6.slice.
Feb 13 15:43:05.695732 sshd[5091]: Connection closed by 10.200.16.10 port 52966
Feb 13 15:43:05.697113 sshd-session[5088]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:05.699949 systemd[1]: sshd@23-10.200.8.20:22-10.200.16.10:52966.service: Deactivated successfully.
Feb 13 15:43:05.702387 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 15:43:05.704084 systemd-logind[1692]: Session 26 logged out. Waiting for processes to exit.
Feb 13 15:43:05.705375 systemd-logind[1692]: Removed session 26.
Feb 13 15:43:05.770430 kubelet[3333]: I0213 15:43:05.770369 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57842b3e-f393-46fb-96b9-adccbf1ab5f6-cilium-config-path\") pod \"cilium-vrcb9\" (UID: \"57842b3e-f393-46fb-96b9-adccbf1ab5f6\") " pod="kube-system/cilium-vrcb9"
Feb 13 15:43:05.770430 kubelet[3333]: I0213 15:43:05.770437 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/57842b3e-f393-46fb-96b9-adccbf1ab5f6-cni-path\") pod \"cilium-vrcb9\" (UID: \"57842b3e-f393-46fb-96b9-adccbf1ab5f6\") " pod="kube-system/cilium-vrcb9"
Feb 13 15:43:05.770720 kubelet[3333]: I0213 15:43:05.770473 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qck7m\" (UniqueName: \"kubernetes.io/projected/57842b3e-f393-46fb-96b9-adccbf1ab5f6-kube-api-access-qck7m\") pod \"cilium-vrcb9\" (UID: \"57842b3e-f393-46fb-96b9-adccbf1ab5f6\") " pod="kube-system/cilium-vrcb9"
Feb 13 15:43:05.770720 kubelet[3333]: I0213 15:43:05.770501 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/57842b3e-f393-46fb-96b9-adccbf1ab5f6-bpf-maps\") pod \"cilium-vrcb9\" (UID: \"57842b3e-f393-46fb-96b9-adccbf1ab5f6\") " pod="kube-system/cilium-vrcb9"
Feb 13 15:43:05.770720 kubelet[3333]: I0213 15:43:05.770532 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/57842b3e-f393-46fb-96b9-adccbf1ab5f6-cilium-cgroup\") pod \"cilium-vrcb9\" (UID: \"57842b3e-f393-46fb-96b9-adccbf1ab5f6\") " pod="kube-system/cilium-vrcb9"
Feb 13 15:43:05.770720 kubelet[3333]: I0213 15:43:05.770563 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/57842b3e-f393-46fb-96b9-adccbf1ab5f6-hubble-tls\") pod \"cilium-vrcb9\" (UID: \"57842b3e-f393-46fb-96b9-adccbf1ab5f6\") " pod="kube-system/cilium-vrcb9"
Feb 13 15:43:05.770720 kubelet[3333]: I0213 15:43:05.770593 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/57842b3e-f393-46fb-96b9-adccbf1ab5f6-hostproc\") pod \"cilium-vrcb9\" (UID: \"57842b3e-f393-46fb-96b9-adccbf1ab5f6\") " pod="kube-system/cilium-vrcb9"
Feb 13 15:43:05.770720 kubelet[3333]: I0213 15:43:05.770639 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/57842b3e-f393-46fb-96b9-adccbf1ab5f6-etc-cni-netd\") pod \"cilium-vrcb9\" (UID: \"57842b3e-f393-46fb-96b9-adccbf1ab5f6\") " pod="kube-system/cilium-vrcb9"
Feb 13 15:43:05.771031 kubelet[3333]: I0213 15:43:05.770671 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57842b3e-f393-46fb-96b9-adccbf1ab5f6-lib-modules\") pod \"cilium-vrcb9\" (UID: \"57842b3e-f393-46fb-96b9-adccbf1ab5f6\") " pod="kube-system/cilium-vrcb9"
Feb 13 15:43:05.771031 kubelet[3333]: I0213 15:43:05.770708 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/57842b3e-f393-46fb-96b9-adccbf1ab5f6-clustermesh-secrets\") pod \"cilium-vrcb9\" (UID: \"57842b3e-f393-46fb-96b9-adccbf1ab5f6\") " pod="kube-system/cilium-vrcb9"
Feb 13 15:43:05.771031 kubelet[3333]: I0213 15:43:05.770743 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57842b3e-f393-46fb-96b9-adccbf1ab5f6-xtables-lock\") pod \"cilium-vrcb9\" (UID: \"57842b3e-f393-46fb-96b9-adccbf1ab5f6\") " pod="kube-system/cilium-vrcb9"
Feb 13 15:43:05.771031 kubelet[3333]: I0213 15:43:05.770783 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/57842b3e-f393-46fb-96b9-adccbf1ab5f6-cilium-run\") pod \"cilium-vrcb9\" (UID: \"57842b3e-f393-46fb-96b9-adccbf1ab5f6\") " pod="kube-system/cilium-vrcb9"
Feb 13 15:43:05.771031 kubelet[3333]: I0213 15:43:05.770819 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/57842b3e-f393-46fb-96b9-adccbf1ab5f6-host-proc-sys-net\") pod \"cilium-vrcb9\" (UID: \"57842b3e-f393-46fb-96b9-adccbf1ab5f6\") " pod="kube-system/cilium-vrcb9"
Feb 13 15:43:05.771031 kubelet[3333]: I0213 15:43:05.770856 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/57842b3e-f393-46fb-96b9-adccbf1ab5f6-host-proc-sys-kernel\") pod \"cilium-vrcb9\" (UID: \"57842b3e-f393-46fb-96b9-adccbf1ab5f6\") " pod="kube-system/cilium-vrcb9"
Feb 13 15:43:05.771396 kubelet[3333]: I0213 15:43:05.770894 3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/57842b3e-f393-46fb-96b9-adccbf1ab5f6-cilium-ipsec-secrets\") pod \"cilium-vrcb9\" (UID: \"57842b3e-f393-46fb-96b9-adccbf1ab5f6\") " pod="kube-system/cilium-vrcb9"
Feb 13 15:43:05.810924 systemd[1]: Started sshd@24-10.200.8.20:22-10.200.16.10:52972.service - OpenSSH per-connection server daemon (10.200.16.10:52972).
Feb 13 15:43:06.452874 sshd[5103]: Accepted publickey for core from 10.200.16.10 port 52972 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ
Feb 13 15:43:06.454403 sshd-session[5103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:06.459528 systemd-logind[1692]: New session 27 of user core.
Feb 13 15:43:06.463768 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 15:43:06.679787 kubelet[3333]: I0213 15:43:06.678326 3333 setters.go:568] "Node became not ready" node="ci-4152.2.1-a-02a9d39241" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:43:06Z","lastTransitionTime":"2025-02-13T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:43:06.872418 kubelet[3333]: E0213 15:43:06.872374 3333 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Feb 13 15:43:06.872638 kubelet[3333]: E0213 15:43:06.872539 3333 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57842b3e-f393-46fb-96b9-adccbf1ab5f6-clustermesh-secrets podName:57842b3e-f393-46fb-96b9-adccbf1ab5f6 nodeName:}" failed. No retries permitted until 2025-02-13 15:43:07.372501077 +0000 UTC m=+215.437993513 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/57842b3e-f393-46fb-96b9-adccbf1ab5f6-clustermesh-secrets") pod "cilium-vrcb9" (UID: "57842b3e-f393-46fb-96b9-adccbf1ab5f6") : failed to sync secret cache: timed out waiting for the condition
Feb 13 15:43:06.904475 sshd[5107]: Connection closed by 10.200.16.10 port 52972
Feb 13 15:43:06.905305 sshd-session[5103]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:06.908761 systemd[1]: sshd@24-10.200.8.20:22-10.200.16.10:52972.service: Deactivated successfully.
Feb 13 15:43:06.911103 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 15:43:06.912947 systemd-logind[1692]: Session 27 logged out. Waiting for processes to exit.
Feb 13 15:43:06.914590 systemd-logind[1692]: Removed session 27.
Feb 13 15:43:07.022924 systemd[1]: Started sshd@25-10.200.8.20:22-10.200.16.10:52980.service - OpenSSH per-connection server daemon (10.200.16.10:52980).
Feb 13 15:43:07.183939 kubelet[3333]: E0213 15:43:07.183792 3333 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:43:07.462199 containerd[1720]: time="2025-02-13T15:43:07.461936925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vrcb9,Uid:57842b3e-f393-46fb-96b9-adccbf1ab5f6,Namespace:kube-system,Attempt:0,}"
Feb 13 15:43:07.521250 containerd[1720]: time="2025-02-13T15:43:07.521161627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:43:07.521250 containerd[1720]: time="2025-02-13T15:43:07.521206128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:43:07.521250 containerd[1720]: time="2025-02-13T15:43:07.521219628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:43:07.521569 containerd[1720]: time="2025-02-13T15:43:07.521336531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:43:07.543437 systemd[1]: run-containerd-runc-k8s.io-7bc4706514e0932e3826f6f9e0cf548004fa3533afe876c58912242b3e618db7-runc.FxiMf3.mount: Deactivated successfully.
Feb 13 15:43:07.551780 systemd[1]: Started cri-containerd-7bc4706514e0932e3826f6f9e0cf548004fa3533afe876c58912242b3e618db7.scope - libcontainer container 7bc4706514e0932e3826f6f9e0cf548004fa3533afe876c58912242b3e618db7.
Feb 13 15:43:07.575433 containerd[1720]: time="2025-02-13T15:43:07.575361601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vrcb9,Uid:57842b3e-f393-46fb-96b9-adccbf1ab5f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bc4706514e0932e3826f6f9e0cf548004fa3533afe876c58912242b3e618db7\""
Feb 13 15:43:07.578369 containerd[1720]: time="2025-02-13T15:43:07.578332877Z" level=info msg="CreateContainer within sandbox \"7bc4706514e0932e3826f6f9e0cf548004fa3533afe876c58912242b3e618db7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:43:07.621278 containerd[1720]: time="2025-02-13T15:43:07.621224164Z" level=info msg="CreateContainer within sandbox \"7bc4706514e0932e3826f6f9e0cf548004fa3533afe876c58912242b3e618db7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b7a85af42a5800adddbebdb65445d3485046151d575e6e087ac0fbf4ef364d45\""
Feb 13 15:43:07.623072 containerd[1720]: time="2025-02-13T15:43:07.621787879Z" level=info msg="StartContainer for \"b7a85af42a5800adddbebdb65445d3485046151d575e6e087ac0fbf4ef364d45\""
Feb 13 15:43:07.647764 systemd[1]: Started cri-containerd-b7a85af42a5800adddbebdb65445d3485046151d575e6e087ac0fbf4ef364d45.scope - libcontainer container b7a85af42a5800adddbebdb65445d3485046151d575e6e087ac0fbf4ef364d45.
Feb 13 15:43:07.666828 sshd[5115]: Accepted publickey for core from 10.200.16.10 port 52980 ssh2: RSA SHA256:jR6YNxChJdNaaBkYEzZuybY0SXwyQCXji0xJnFp2zmQ
Feb 13 15:43:07.668956 sshd-session[5115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:07.676296 systemd-logind[1692]: New session 28 of user core.
Feb 13 15:43:07.679628 containerd[1720]: time="2025-02-13T15:43:07.679322038Z" level=info msg="StartContainer for \"b7a85af42a5800adddbebdb65445d3485046151d575e6e087ac0fbf4ef364d45\" returns successfully"
Feb 13 15:43:07.684207 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 15:43:07.686715 systemd[1]: cri-containerd-b7a85af42a5800adddbebdb65445d3485046151d575e6e087ac0fbf4ef364d45.scope: Deactivated successfully.
Feb 13 15:43:07.774707 containerd[1720]: time="2025-02-13T15:43:07.774365748Z" level=info msg="shim disconnected" id=b7a85af42a5800adddbebdb65445d3485046151d575e6e087ac0fbf4ef364d45 namespace=k8s.io
Feb 13 15:43:07.774707 containerd[1720]: time="2025-02-13T15:43:07.774537152Z" level=warning msg="cleaning up after shim disconnected" id=b7a85af42a5800adddbebdb65445d3485046151d575e6e087ac0fbf4ef364d45 namespace=k8s.io
Feb 13 15:43:07.774707 containerd[1720]: time="2025-02-13T15:43:07.774551953Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:43:08.651976 containerd[1720]: time="2025-02-13T15:43:08.651597295Z" level=info msg="CreateContainer within sandbox \"7bc4706514e0932e3826f6f9e0cf548004fa3533afe876c58912242b3e618db7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:43:08.704629 containerd[1720]: time="2025-02-13T15:43:08.704448435Z" level=info msg="CreateContainer within sandbox \"7bc4706514e0932e3826f6f9e0cf548004fa3533afe876c58912242b3e618db7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4247f77fa164382c0f30bf298406c64e0670d82811910e086984cc38d0c44463\""
Feb 13 15:43:08.706337 containerd[1720]: time="2025-02-13T15:43:08.705176853Z" level=info msg="StartContainer for \"4247f77fa164382c0f30bf298406c64e0670d82811910e086984cc38d0c44463\""
Feb 13 15:43:08.769004 systemd[1]: Started cri-containerd-4247f77fa164382c0f30bf298406c64e0670d82811910e086984cc38d0c44463.scope - libcontainer container 4247f77fa164382c0f30bf298406c64e0670d82811910e086984cc38d0c44463.
Feb 13 15:43:08.806394 containerd[1720]: time="2025-02-13T15:43:08.806253417Z" level=info msg="StartContainer for \"4247f77fa164382c0f30bf298406c64e0670d82811910e086984cc38d0c44463\" returns successfully"
Feb 13 15:43:08.811141 systemd[1]: cri-containerd-4247f77fa164382c0f30bf298406c64e0670d82811910e086984cc38d0c44463.scope: Deactivated successfully.
Feb 13 15:43:08.848181 containerd[1720]: time="2025-02-13T15:43:08.848105678Z" level=info msg="shim disconnected" id=4247f77fa164382c0f30bf298406c64e0670d82811910e086984cc38d0c44463 namespace=k8s.io
Feb 13 15:43:08.848181 containerd[1720]: time="2025-02-13T15:43:08.848174780Z" level=warning msg="cleaning up after shim disconnected" id=4247f77fa164382c0f30bf298406c64e0670d82811910e086984cc38d0c44463 namespace=k8s.io
Feb 13 15:43:08.848181 containerd[1720]: time="2025-02-13T15:43:08.848186580Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:43:09.388604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4247f77fa164382c0f30bf298406c64e0670d82811910e086984cc38d0c44463-rootfs.mount: Deactivated successfully.
Feb 13 15:43:09.656747 containerd[1720]: time="2025-02-13T15:43:09.656433077Z" level=info msg="CreateContainer within sandbox \"7bc4706514e0932e3826f6f9e0cf548004fa3533afe876c58912242b3e618db7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:43:09.713051 containerd[1720]: time="2025-02-13T15:43:09.712917610Z" level=info msg="CreateContainer within sandbox \"7bc4706514e0932e3826f6f9e0cf548004fa3533afe876c58912242b3e618db7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ebcc80c32e2e8a4594b0b9a83054736ddee1d9c6031bb97a9a0d9412334935a7\""
Feb 13 15:43:09.713600 containerd[1720]: time="2025-02-13T15:43:09.713442923Z" level=info msg="StartContainer for \"ebcc80c32e2e8a4594b0b9a83054736ddee1d9c6031bb97a9a0d9412334935a7\""
Feb 13 15:43:09.752790 systemd[1]: Started cri-containerd-ebcc80c32e2e8a4594b0b9a83054736ddee1d9c6031bb97a9a0d9412334935a7.scope - libcontainer container ebcc80c32e2e8a4594b0b9a83054736ddee1d9c6031bb97a9a0d9412334935a7.
Feb 13 15:43:09.783682 systemd[1]: cri-containerd-ebcc80c32e2e8a4594b0b9a83054736ddee1d9c6031bb97a9a0d9412334935a7.scope: Deactivated successfully.
Feb 13 15:43:09.786920 containerd[1720]: time="2025-02-13T15:43:09.785436449Z" level=info msg="StartContainer for \"ebcc80c32e2e8a4594b0b9a83054736ddee1d9c6031bb97a9a0d9412334935a7\" returns successfully"
Feb 13 15:43:09.820972 containerd[1720]: time="2025-02-13T15:43:09.820897848Z" level=info msg="shim disconnected" id=ebcc80c32e2e8a4594b0b9a83054736ddee1d9c6031bb97a9a0d9412334935a7 namespace=k8s.io
Feb 13 15:43:09.820972 containerd[1720]: time="2025-02-13T15:43:09.820966950Z" level=warning msg="cleaning up after shim disconnected" id=ebcc80c32e2e8a4594b0b9a83054736ddee1d9c6031bb97a9a0d9412334935a7 namespace=k8s.io
Feb 13 15:43:09.820972 containerd[1720]: time="2025-02-13T15:43:09.820977550Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:43:10.388094 systemd[1]: run-containerd-runc-k8s.io-ebcc80c32e2e8a4594b0b9a83054736ddee1d9c6031bb97a9a0d9412334935a7-runc.BLSc7d.mount: Deactivated successfully.
Feb 13 15:43:10.388228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebcc80c32e2e8a4594b0b9a83054736ddee1d9c6031bb97a9a0d9412334935a7-rootfs.mount: Deactivated successfully.
Feb 13 15:43:10.663728 containerd[1720]: time="2025-02-13T15:43:10.662925002Z" level=info msg="CreateContainer within sandbox \"7bc4706514e0932e3826f6f9e0cf548004fa3533afe876c58912242b3e618db7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:43:10.707879 containerd[1720]: time="2025-02-13T15:43:10.707829440Z" level=info msg="CreateContainer within sandbox \"7bc4706514e0932e3826f6f9e0cf548004fa3533afe876c58912242b3e618db7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2398db4217be762a86d88453f8452181b655900096d07adea5067264953b53c6\""
Feb 13 15:43:10.710929 containerd[1720]: time="2025-02-13T15:43:10.710579710Z" level=info msg="StartContainer for \"2398db4217be762a86d88453f8452181b655900096d07adea5067264953b53c6\""
Feb 13 15:43:10.750744 systemd[1]: Started cri-containerd-2398db4217be762a86d88453f8452181b655900096d07adea5067264953b53c6.scope - libcontainer container 2398db4217be762a86d88453f8452181b655900096d07adea5067264953b53c6.
Feb 13 15:43:10.773883 systemd[1]: cri-containerd-2398db4217be762a86d88453f8452181b655900096d07adea5067264953b53c6.scope: Deactivated successfully.
Feb 13 15:43:10.779328 containerd[1720]: time="2025-02-13T15:43:10.778385730Z" level=info msg="StartContainer for \"2398db4217be762a86d88453f8452181b655900096d07adea5067264953b53c6\" returns successfully"
Feb 13 15:43:10.815445 containerd[1720]: time="2025-02-13T15:43:10.815372268Z" level=info msg="shim disconnected" id=2398db4217be762a86d88453f8452181b655900096d07adea5067264953b53c6 namespace=k8s.io
Feb 13 15:43:10.815711 containerd[1720]: time="2025-02-13T15:43:10.815627074Z" level=warning msg="cleaning up after shim disconnected" id=2398db4217be762a86d88453f8452181b655900096d07adea5067264953b53c6 namespace=k8s.io
Feb 13 15:43:10.815711 containerd[1720]: time="2025-02-13T15:43:10.815653475Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:43:11.388885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2398db4217be762a86d88453f8452181b655900096d07adea5067264953b53c6-rootfs.mount: Deactivated successfully.
Feb 13 15:43:11.666911 containerd[1720]: time="2025-02-13T15:43:11.666492245Z" level=info msg="CreateContainer within sandbox \"7bc4706514e0932e3826f6f9e0cf548004fa3533afe876c58912242b3e618db7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:43:11.701296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1884051978.mount: Deactivated successfully.
Feb 13 15:43:11.711636 containerd[1720]: time="2025-02-13T15:43:11.710803527Z" level=info msg="CreateContainer within sandbox \"7bc4706514e0932e3826f6f9e0cf548004fa3533afe876c58912242b3e618db7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4a9b5c89245af60215333a321d9f007f70e2abee0b79413a48a32e38b75394f3\""
Feb 13 15:43:11.712211 containerd[1720]: time="2025-02-13T15:43:11.712174064Z" level=info msg="StartContainer for \"4a9b5c89245af60215333a321d9f007f70e2abee0b79413a48a32e38b75394f3\""
Feb 13 15:43:11.749760 systemd[1]: Started cri-containerd-4a9b5c89245af60215333a321d9f007f70e2abee0b79413a48a32e38b75394f3.scope - libcontainer container 4a9b5c89245af60215333a321d9f007f70e2abee0b79413a48a32e38b75394f3.
Feb 13 15:43:11.780787 containerd[1720]: time="2025-02-13T15:43:11.780739493Z" level=info msg="StartContainer for \"4a9b5c89245af60215333a321d9f007f70e2abee0b79413a48a32e38b75394f3\" returns successfully"
Feb 13 15:43:12.340647 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 15:43:12.686059 kubelet[3333]: I0213 15:43:12.686022 3333 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vrcb9" podStartSLOduration=7.68596034 podStartE2EDuration="7.68596034s" podCreationTimestamp="2025-02-13 15:43:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:43:12.68560063 +0000 UTC m=+220.751092966" watchObservedRunningTime="2025-02-13 15:43:12.68596034 +0000 UTC m=+220.751452676"
Feb 13 15:43:15.159914 systemd-networkd[1324]: lxc_health: Link UP
Feb 13 15:43:15.184284 systemd-networkd[1324]: lxc_health: Gained carrier
Feb 13 15:43:16.638392 systemd[1]: run-containerd-runc-k8s.io-4a9b5c89245af60215333a321d9f007f70e2abee0b79413a48a32e38b75394f3-runc.jVQRyD.mount: Deactivated successfully.
Feb 13 15:43:17.056910 systemd-networkd[1324]: lxc_health: Gained IPv6LL
Feb 13 15:43:18.899550 systemd[1]: run-containerd-runc-k8s.io-4a9b5c89245af60215333a321d9f007f70e2abee0b79413a48a32e38b75394f3-runc.J2hWAz.mount: Deactivated successfully.
Feb 13 15:43:21.225826 sshd[5194]: Connection closed by 10.200.16.10 port 52980
Feb 13 15:43:21.226830 sshd-session[5115]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:21.231181 systemd[1]: sshd@25-10.200.8.20:22-10.200.16.10:52980.service: Deactivated successfully.
Feb 13 15:43:21.233295 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 15:43:21.234215 systemd-logind[1692]: Session 28 logged out. Waiting for processes to exit.
Feb 13 15:43:21.235220 systemd-logind[1692]: Removed session 28.