Jan 14 13:18:15.126637 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 14 13:18:15.126670 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 14 13:18:15.126679 kernel: BIOS-provided physical RAM map:
Jan 14 13:18:15.126690 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 13:18:15.126695 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 14 13:18:15.126702 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 14 13:18:15.126712 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Jan 14 13:18:15.126722 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 14 13:18:15.126730 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 14 13:18:15.126737 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 14 13:18:15.126743 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 14 13:18:15.126752 kernel: printk: bootconsole [earlyser0] enabled
Jan 14 13:18:15.126758 kernel: NX (Execute Disable) protection: active
Jan 14 13:18:15.126764 kernel: APIC: Static calls initialized
Jan 14 13:18:15.126778 kernel: efi: EFI v2.7 by Microsoft
Jan 14 13:18:15.126785 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018
Jan 14 13:18:15.126796 kernel: random: crng init done
Jan 14 13:18:15.126803 kernel: secureboot: Secure boot disabled
Jan 14 13:18:15.126810 kernel: SMBIOS 3.1.0 present.
Jan 14 13:18:15.126821 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 14 13:18:15.126828 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 14 13:18:15.126835 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 14 13:18:15.126841 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 14 13:18:15.126848 kernel: Hyper-V: Nested features: 0x1e0101
Jan 14 13:18:15.126859 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 14 13:18:15.126868 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 14 13:18:15.126875 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:18:15.126884 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:18:15.126893 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 14 13:18:15.126900 kernel: tsc: Detected 2593.905 MHz processor
Jan 14 13:18:15.126910 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 13:18:15.126917 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 13:18:15.126925 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 14 13:18:15.126937 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 14 13:18:15.126944 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 13:18:15.126955 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 14 13:18:15.126962 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 14 13:18:15.126969 kernel: Using GB pages for direct mapping
Jan 14 13:18:15.126979 kernel: ACPI: Early table checksum verification disabled
Jan 14 13:18:15.126986 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 14 13:18:15.127001 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:18:15.127011 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:18:15.127020 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 14 13:18:15.127029 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 14 13:18:15.127036 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:18:15.127047 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:18:15.127055 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:18:15.127064 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:18:15.127072 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:18:15.127083 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:18:15.127090 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:18:15.127098 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 14 13:18:15.127109 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 14 13:18:15.127117 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 14 13:18:15.127129 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 14 13:18:15.127137 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 14 13:18:15.127149 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 14 13:18:15.127157 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 14 13:18:15.127168 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 14 13:18:15.127176 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 14 13:18:15.127185 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 14 13:18:15.127194 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 14 13:18:15.127202 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 14 13:18:15.127213 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 14 13:18:15.127224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 14 13:18:15.127233 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 14 13:18:15.127242 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 14 13:18:15.127260 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 14 13:18:15.127269 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 14 13:18:15.127276 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 14 13:18:15.127284 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 14 13:18:15.127294 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 14 13:18:15.127302 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 14 13:18:15.127315 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 14 13:18:15.127322 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 14 13:18:15.127331 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 14 13:18:15.127340 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 14 13:18:15.127348 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 14 13:18:15.127359 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 14 13:18:15.127367 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 14 13:18:15.127375 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 14 13:18:15.127385 kernel: Zone ranges:
Jan 14 13:18:15.127395 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 13:18:15.127405 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 14 13:18:15.127413 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:18:15.127422 kernel: Movable zone start for each node
Jan 14 13:18:15.127431 kernel: Early memory node ranges
Jan 14 13:18:15.127439 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 14 13:18:15.127450 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 14 13:18:15.127457 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 14 13:18:15.127466 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:18:15.127478 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 14 13:18:15.127486 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 13:18:15.127496 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 14 13:18:15.127505 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 14 13:18:15.127514 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 14 13:18:15.127524 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 14 13:18:15.127535 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 14 13:18:15.127548 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 13:18:15.127563 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 13:18:15.127586 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 14 13:18:15.127605 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 14 13:18:15.127623 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 14 13:18:15.127640 kernel: Booting paravirtualized kernel on Hyper-V
Jan 14 13:18:15.127655 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 13:18:15.127673 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 14 13:18:15.127687 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 14 13:18:15.127701 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 14 13:18:15.127715 kernel: pcpu-alloc: [0] 0 1
Jan 14 13:18:15.127731 kernel: Hyper-V: PV spinlocks enabled
Jan 14 13:18:15.127747 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 13:18:15.127767 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 14 13:18:15.127784 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 14 13:18:15.127800 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 14 13:18:15.127816 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 13:18:15.127832 kernel: Fallback order for Node 0: 0
Jan 14 13:18:15.127849 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 14 13:18:15.127870 kernel: Policy zone: Normal
Jan 14 13:18:15.127898 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 13:18:15.127917 kernel: software IO TLB: area num 2.
Jan 14 13:18:15.127937 kernel: Memory: 8077088K/8387460K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 310116K reserved, 0K cma-reserved)
Jan 14 13:18:15.127954 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 14 13:18:15.127969 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 14 13:18:15.127987 kernel: ftrace: allocated 149 pages with 4 groups
Jan 14 13:18:15.128002 kernel: Dynamic Preempt: voluntary
Jan 14 13:18:15.128018 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 13:18:15.128035 kernel: rcu: RCU event tracing is enabled.
Jan 14 13:18:15.128055 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 14 13:18:15.128076 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 13:18:15.128093 kernel: Rude variant of Tasks RCU enabled.
Jan 14 13:18:15.128111 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 13:18:15.128129 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 13:18:15.128148 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 14 13:18:15.128169 kernel: Using NULL legacy PIC
Jan 14 13:18:15.128185 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 14 13:18:15.128204 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 13:18:15.128220 kernel: Console: colour dummy device 80x25
Jan 14 13:18:15.128237 kernel: printk: console [tty1] enabled
Jan 14 13:18:15.128262 kernel: printk: console [ttyS0] enabled
Jan 14 13:18:15.128278 kernel: printk: bootconsole [earlyser0] disabled
Jan 14 13:18:15.128290 kernel: ACPI: Core revision 20230628
Jan 14 13:18:15.128302 kernel: Failed to register legacy timer interrupt
Jan 14 13:18:15.128314 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 13:18:15.128331 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 14 13:18:15.128344 kernel: Hyper-V: Using IPI hypercalls
Jan 14 13:18:15.128358 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 14 13:18:15.128371 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 14 13:18:15.128384 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 14 13:18:15.128397 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 14 13:18:15.128410 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 14 13:18:15.128424 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 14 13:18:15.128438 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Jan 14 13:18:15.128456 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 14 13:18:15.128470 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 14 13:18:15.128483 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 13:18:15.128497 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 13:18:15.128512 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 14 13:18:15.128527 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 14 13:18:15.128542 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 14 13:18:15.128556 kernel: RETBleed: Vulnerable
Jan 14 13:18:15.128571 kernel: Speculative Store Bypass: Vulnerable
Jan 14 13:18:15.128585 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:18:15.128602 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:18:15.128616 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 14 13:18:15.128629 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 13:18:15.128642 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 13:18:15.128655 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 13:18:15.128668 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 14 13:18:15.128682 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 14 13:18:15.128696 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 14 13:18:15.128710 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 13:18:15.128723 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 14 13:18:15.128736 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 14 13:18:15.128753 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 14 13:18:15.128766 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 14 13:18:15.128781 kernel: Freeing SMP alternatives memory: 32K
Jan 14 13:18:15.128794 kernel: pid_max: default: 32768 minimum: 301
Jan 14 13:18:15.128808 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 14 13:18:15.128822 kernel: landlock: Up and running.
Jan 14 13:18:15.128834 kernel: SELinux: Initializing.
Jan 14 13:18:15.128847 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:18:15.128861 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:18:15.128875 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 14 13:18:15.128889 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:18:15.128907 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:18:15.128921 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:18:15.128935 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 14 13:18:15.128949 kernel: signal: max sigframe size: 3632
Jan 14 13:18:15.128964 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 13:18:15.128978 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 13:18:15.128993 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 13:18:15.129006 kernel: smp: Bringing up secondary CPUs ...
Jan 14 13:18:15.129020 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 13:18:15.129036 kernel: .... node #0, CPUs: #1
Jan 14 13:18:15.129051 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 14 13:18:15.129066 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 14 13:18:15.129080 kernel: smp: Brought up 1 node, 2 CPUs
Jan 14 13:18:15.129094 kernel: smpboot: Max logical packages: 1
Jan 14 13:18:15.129108 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jan 14 13:18:15.129121 kernel: devtmpfs: initialized
Jan 14 13:18:15.129135 kernel: x86/mm: Memory block size: 128MB
Jan 14 13:18:15.129152 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 14 13:18:15.129167 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 13:18:15.129181 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 14 13:18:15.129195 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 13:18:15.129209 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 13:18:15.129222 kernel: audit: initializing netlink subsys (disabled)
Jan 14 13:18:15.129237 kernel: audit: type=2000 audit(1736860693.028:1): state=initialized audit_enabled=0 res=1
Jan 14 13:18:15.131282 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 13:18:15.131310 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 13:18:15.131332 kernel: cpuidle: using governor menu
Jan 14 13:18:15.131348 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 13:18:15.131363 kernel: dca service started, version 1.12.1
Jan 14 13:18:15.131378 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 14 13:18:15.131394 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 13:18:15.131409 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 13:18:15.131425 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 13:18:15.131440 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 13:18:15.131454 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 13:18:15.131473 kernel: ACPI: Added _OSI(Module Device)
Jan 14 13:18:15.131488 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 13:18:15.131503 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 14 13:18:15.131518 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 13:18:15.131534 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 13:18:15.131549 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 14 13:18:15.131564 kernel: ACPI: Interpreter enabled
Jan 14 13:18:15.131579 kernel: ACPI: PM: (supports S0 S5)
Jan 14 13:18:15.131594 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 13:18:15.131613 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 13:18:15.131629 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 14 13:18:15.131644 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 14 13:18:15.131659 kernel: iommu: Default domain type: Translated
Jan 14 13:18:15.131674 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 13:18:15.131689 kernel: efivars: Registered efivars operations
Jan 14 13:18:15.131704 kernel: PCI: Using ACPI for IRQ routing
Jan 14 13:18:15.131719 kernel: PCI: System does not support PCI
Jan 14 13:18:15.131733 kernel: vgaarb: loaded
Jan 14 13:18:15.131752 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 14 13:18:15.131767 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 13:18:15.131782 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 13:18:15.131797 kernel: pnp: PnP ACPI init
Jan 14 13:18:15.131811 kernel: pnp: PnP ACPI: found 3 devices
Jan 14 13:18:15.131826 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 13:18:15.131842 kernel: NET: Registered PF_INET protocol family
Jan 14 13:18:15.131857 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 14 13:18:15.131873 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 14 13:18:15.131891 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 13:18:15.131906 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 13:18:15.131922 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 14 13:18:15.131937 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 14 13:18:15.131953 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:18:15.131968 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:18:15.131983 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 13:18:15.131998 kernel: NET: Registered PF_XDP protocol family
Jan 14 13:18:15.132013 kernel: PCI: CLS 0 bytes, default 64
Jan 14 13:18:15.132030 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 14 13:18:15.132046 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jan 14 13:18:15.132061 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 14 13:18:15.132076 kernel: Initialise system trusted keyrings
Jan 14 13:18:15.132091 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 14 13:18:15.132106 kernel: Key type asymmetric registered
Jan 14 13:18:15.132121 kernel: Asymmetric key parser 'x509' registered
Jan 14 13:18:15.132135 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 14 13:18:15.132150 kernel: io scheduler mq-deadline registered
Jan 14 13:18:15.132168 kernel: io scheduler kyber registered
Jan 14 13:18:15.132183 kernel: io scheduler bfq registered
Jan 14 13:18:15.132198 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 13:18:15.132213 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 13:18:15.132228 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 13:18:15.132243 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 14 13:18:15.132267 kernel: i8042: PNP: No PS/2 controller found.
Jan 14 13:18:15.132460 kernel: rtc_cmos 00:02: registered as rtc0
Jan 14 13:18:15.132590 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T13:18:14 UTC (1736860694)
Jan 14 13:18:15.132705 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 14 13:18:15.132724 kernel: intel_pstate: CPU model not supported
Jan 14 13:18:15.132740 kernel: efifb: probing for efifb
Jan 14 13:18:15.132755 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 14 13:18:15.132771 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 14 13:18:15.132786 kernel: efifb: scrolling: redraw
Jan 14 13:18:15.132801 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 13:18:15.132816 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 13:18:15.132835 kernel: fb0: EFI VGA frame buffer device
Jan 14 13:18:15.132850 kernel: pstore: Using crash dump compression: deflate
Jan 14 13:18:15.132865 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 14 13:18:15.132880 kernel: NET: Registered PF_INET6 protocol family
Jan 14 13:18:15.132895 kernel: Segment Routing with IPv6
Jan 14 13:18:15.132909 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 13:18:15.132925 kernel: NET: Registered PF_PACKET protocol family
Jan 14 13:18:15.132939 kernel: Key type dns_resolver registered
Jan 14 13:18:15.132954 kernel: IPI shorthand broadcast: enabled
Jan 14 13:18:15.132972 kernel: sched_clock: Marking stable (961004100, 50334100)->(1283094400, -271756200)
Jan 14 13:18:15.132988 kernel: registered taskstats version 1
Jan 14 13:18:15.133002 kernel: Loading compiled-in X.509 certificates
Jan 14 13:18:15.133018 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 14 13:18:15.133033 kernel: Key type .fscrypt registered
Jan 14 13:18:15.133048 kernel: Key type fscrypt-provisioning registered
Jan 14 13:18:15.133063 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 13:18:15.133078 kernel: ima: Allocated hash algorithm: sha1
Jan 14 13:18:15.133096 kernel: ima: No architecture policies found
Jan 14 13:18:15.133111 kernel: clk: Disabling unused clocks
Jan 14 13:18:15.133126 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 14 13:18:15.133141 kernel: Write protecting the kernel read-only data: 36864k
Jan 14 13:18:15.133155 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 14 13:18:15.133171 kernel: Run /init as init process
Jan 14 13:18:15.133185 kernel:   with arguments:
Jan 14 13:18:15.133200 kernel:     /init
Jan 14 13:18:15.133214 kernel:   with environment:
Jan 14 13:18:15.133228 kernel:     HOME=/
Jan 14 13:18:15.133245 kernel:     TERM=linux
Jan 14 13:18:15.133281 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 14 13:18:15.133295 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 13:18:15.133310 systemd[1]: Detected virtualization microsoft.
Jan 14 13:18:15.133323 systemd[1]: Detected architecture x86-64.
Jan 14 13:18:15.133336 systemd[1]: Running in initrd.
Jan 14 13:18:15.133348 systemd[1]: No hostname configured, using default hostname.
Jan 14 13:18:15.133363 systemd[1]: Hostname set to .
Jan 14 13:18:15.133373 systemd[1]: Initializing machine ID from random generator.
Jan 14 13:18:15.133381 systemd[1]: Queued start job for default target initrd.target.
Jan 14 13:18:15.133390 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:18:15.133398 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:18:15.133408 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 13:18:15.133416 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:18:15.133425 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 13:18:15.133436 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 13:18:15.133445 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 14 13:18:15.133455 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 14 13:18:15.133464 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:18:15.133472 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:18:15.133481 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:18:15.133490 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:18:15.133501 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:18:15.133510 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:18:15.133519 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:18:15.133528 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:18:15.133537 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 13:18:15.133545 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 14 13:18:15.133554 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:18:15.133562 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:18:15.133571 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:18:15.133582 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:18:15.133590 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 13:18:15.133598 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:18:15.133607 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 13:18:15.133615 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 13:18:15.133624 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:18:15.133632 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:18:15.133663 systemd-journald[177]: Collecting audit messages is disabled.
Jan 14 13:18:15.133688 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:18:15.133698 systemd-journald[177]: Journal started
Jan 14 13:18:15.133721 systemd-journald[177]: Runtime Journal (/run/log/journal/c518949d4a1a46e0a8b53d3b8756831a) is 8.0M, max 158.8M, 150.8M free.
Jan 14 13:18:15.150495 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:18:15.149605 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 13:18:15.154907 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:18:15.158123 systemd-modules-load[178]: Inserted module 'overlay'
Jan 14 13:18:15.163802 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 13:18:15.188393 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 13:18:15.197531 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:18:15.204193 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:18:15.211367 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:18:15.215874 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:18:15.232643 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:18:15.234819 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:18:15.260279 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 13:18:15.264752 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jan 14 13:18:15.267013 kernel: Bridge firewalling registered
Jan 14 13:18:15.267536 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:18:15.272714 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:18:15.286473 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 13:18:15.294446 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:18:15.306675 dracut-cmdline[207]: dracut-dracut-053 Jan 14 13:18:15.306675 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 14 13:18:15.297777 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:18:15.343497 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:18:15.355597 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 13:18:15.392276 kernel: SCSI subsystem initialized Jan 14 13:18:15.404085 systemd-resolved[271]: Positive Trust Anchors: Jan 14 13:18:15.410616 kernel: Loading iSCSI transport class v2.0-870. Jan 14 13:18:15.404103 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 13:18:15.404145 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 13:18:15.409875 systemd-resolved[271]: Defaulting to hostname 'linux'. 
Jan 14 13:18:15.443029 kernel: iscsi: registered transport (tcp) Jan 14 13:18:15.415202 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 13:18:15.418296 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:18:15.465554 kernel: iscsi: registered transport (qla4xxx) Jan 14 13:18:15.465656 kernel: QLogic iSCSI HBA Driver Jan 14 13:18:15.502296 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 13:18:15.511428 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 13:18:15.541286 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 13:18:15.541387 kernel: device-mapper: uevent: version 1.0.3 Jan 14 13:18:15.544987 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 14 13:18:15.586288 kernel: raid6: avx512x4 gen() 18190 MB/s Jan 14 13:18:15.606284 kernel: raid6: avx512x2 gen() 18310 MB/s Jan 14 13:18:15.625267 kernel: raid6: avx512x1 gen() 18139 MB/s Jan 14 13:18:15.644266 kernel: raid6: avx2x4 gen() 18099 MB/s Jan 14 13:18:15.663270 kernel: raid6: avx2x2 gen() 18229 MB/s Jan 14 13:18:15.683194 kernel: raid6: avx2x1 gen() 13536 MB/s Jan 14 13:18:15.683237 kernel: raid6: using algorithm avx512x2 gen() 18310 MB/s Jan 14 13:18:15.704344 kernel: raid6: .... xor() 28496 MB/s, rmw enabled Jan 14 13:18:15.704414 kernel: raid6: using avx512x2 recovery algorithm Jan 14 13:18:15.728287 kernel: xor: automatically using best checksumming function avx Jan 14 13:18:15.874281 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 13:18:15.884642 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:18:15.893558 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:18:15.918028 systemd-udevd[396]: Using default interface naming scheme 'v255'. 
Jan 14 13:18:15.922665 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:18:15.942485 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 13:18:15.956094 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Jan 14 13:18:15.985789 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:18:16.002576 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:18:16.045743 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:18:16.064528 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 13:18:16.093596 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 13:18:16.101880 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:18:16.109031 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:18:16.116278 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 13:18:16.126278 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 13:18:16.132414 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 13:18:16.157162 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:18:16.157348 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:18:16.170570 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:18:16.190582 kernel: AVX2 version of gcm_enc/dec engaged. Jan 14 13:18:16.190619 kernel: AES CTR mode by8 optimization enabled Jan 14 13:18:16.190646 kernel: hv_vmbus: Vmbus version:5.2 Jan 14 13:18:16.174133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:18:16.174366 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 14 13:18:16.187725 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:18:16.209295 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:18:16.217071 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:18:16.229926 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 14 13:18:16.229958 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 14 13:18:16.242515 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 13:18:16.247828 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:18:16.255422 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 14 13:18:16.250345 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:18:16.263272 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:18:16.279289 kernel: PTP clock support registered Jan 14 13:18:16.279320 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 14 13:18:16.289979 kernel: hv_vmbus: registering driver hid_hyperv Jan 14 13:18:16.290034 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 14 13:18:16.297271 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 14 13:18:16.312267 kernel: hv_vmbus: registering driver hv_netvsc Jan 14 13:18:16.320484 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 14 13:18:16.335057 kernel: hv_utils: Registering HyperV Utility Driver Jan 14 13:18:16.335135 kernel: hv_vmbus: registering driver hv_utils Jan 14 13:18:16.343793 kernel: hv_utils: Shutdown IC version 3.2 Jan 14 13:18:16.343901 kernel: hv_utils: Heartbeat IC version 3.0 Jan 14 13:18:16.344104 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:18:16.349888 kernel: hv_utils: TimeSync IC version 4.0 Jan 14 13:18:17.054708 systemd-resolved[271]: Clock change detected. Flushing caches. Jan 14 13:18:17.061664 kernel: hv_vmbus: registering driver hv_storvsc Jan 14 13:18:17.065842 kernel: scsi host1: storvsc_host_t Jan 14 13:18:17.065947 kernel: scsi host0: storvsc_host_t Jan 14 13:18:17.072555 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:18:17.081970 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 14 13:18:17.082034 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 14 13:18:17.104101 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 14 13:18:17.106635 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 13:18:17.106661 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 14 13:18:17.117919 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 14 13:18:17.135751 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 14 13:18:17.135963 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 14 13:18:17.136132 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 14 13:18:17.136307 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 14 13:18:17.136482 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:18:17.136503 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 14 13:18:17.254666 kernel: hv_netvsc 6045bd0e-bce9-6045-bd0e-bce96045bd0e eth0: VF slot 1 added Jan 14 13:18:17.265144 kernel: hv_vmbus: registering driver hv_pci Jan 14 
13:18:17.265207 kernel: hv_pci 670cc21e-db54-4862-8c56-d82ca5bff970: PCI VMBus probing: Using version 0x10004 Jan 14 13:18:17.310444 kernel: hv_pci 670cc21e-db54-4862-8c56-d82ca5bff970: PCI host bridge to bus db54:00 Jan 14 13:18:17.310657 kernel: pci_bus db54:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 14 13:18:17.311666 kernel: pci_bus db54:00: No busn resource found for root bus, will use [bus 00-ff] Jan 14 13:18:17.311830 kernel: pci db54:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 14 13:18:17.312034 kernel: pci db54:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:18:17.312212 kernel: pci db54:00:02.0: enabling Extended Tags Jan 14 13:18:17.312386 kernel: pci db54:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at db54:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 14 13:18:17.312557 kernel: pci_bus db54:00: busn_res: [bus 00-ff] end is updated to 00 Jan 14 13:18:17.312726 kernel: pci db54:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:18:17.483365 kernel: mlx5_core db54:00:02.0: enabling device (0000 -> 0002) Jan 14 13:18:17.747012 kernel: mlx5_core db54:00:02.0: firmware version: 14.30.5000 Jan 14 13:18:17.747249 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (439) Jan 14 13:18:17.747272 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (446) Jan 14 13:18:17.747293 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:18:17.747313 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:18:17.747333 kernel: hv_netvsc 6045bd0e-bce9-6045-bd0e-bce96045bd0e eth0: VF registering: eth1 Jan 14 13:18:17.747515 kernel: mlx5_core db54:00:02.0 eth1: joined to eth0 Jan 14 13:18:17.748793 kernel: mlx5_core db54:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 14 13:18:17.547157 systemd[1]: Found device 
dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 14 13:18:17.635420 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 14 13:18:17.663567 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 14 13:18:17.672008 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 14 13:18:17.765739 kernel: mlx5_core db54:00:02.0 enP56148s1: renamed from eth1 Jan 14 13:18:17.675774 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 14 13:18:17.691968 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 13:18:18.734640 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:18:18.735344 disk-uuid[599]: The operation has completed successfully. Jan 14 13:18:18.825739 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 13:18:18.825857 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 13:18:18.839782 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 14 13:18:18.846171 sh[686]: Success Jan 14 13:18:18.873721 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 14 13:18:19.084172 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 14 13:18:19.090021 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 14 13:18:19.100739 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 14 13:18:19.117630 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 14 13:18:19.117694 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:18:19.123443 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 14 13:18:19.126388 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 13:18:19.132383 kernel: BTRFS info (device dm-0): using free space tree Jan 14 13:18:19.396314 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 14 13:18:19.402472 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 13:18:19.411815 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 13:18:19.417920 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 13:18:19.434035 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:18:19.434102 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:18:19.434118 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:18:19.458347 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:18:19.472936 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:18:19.472502 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 14 13:18:19.482282 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 13:18:19.491905 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 13:18:19.537069 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:18:19.547846 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 14 13:18:19.567790 systemd-networkd[870]: lo: Link UP Jan 14 13:18:19.567800 systemd-networkd[870]: lo: Gained carrier Jan 14 13:18:19.569941 systemd-networkd[870]: Enumeration completed Jan 14 13:18:19.570471 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:18:19.572410 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:18:19.572415 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:18:19.576923 systemd[1]: Reached target network.target - Network. Jan 14 13:18:19.640636 kernel: mlx5_core db54:00:02.0 enP56148s1: Link up Jan 14 13:18:19.675637 kernel: hv_netvsc 6045bd0e-bce9-6045-bd0e-bce96045bd0e eth0: Data path switched to VF: enP56148s1 Jan 14 13:18:19.676547 systemd-networkd[870]: enP56148s1: Link UP Jan 14 13:18:19.676710 systemd-networkd[870]: eth0: Link UP Jan 14 13:18:19.676933 systemd-networkd[870]: eth0: Gained carrier Jan 14 13:18:19.676949 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 14 13:18:19.682874 systemd-networkd[870]: enP56148s1: Gained carrier Jan 14 13:18:19.720684 systemd-networkd[870]: eth0: DHCPv4 address 10.200.4.47/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 13:18:20.258469 ignition[801]: Ignition 2.20.0 Jan 14 13:18:20.258485 ignition[801]: Stage: fetch-offline Jan 14 13:18:20.258537 ignition[801]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:18:20.258548 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:18:20.258695 ignition[801]: parsed url from cmdline: "" Jan 14 13:18:20.258700 ignition[801]: no config URL provided Jan 14 13:18:20.258708 ignition[801]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:18:20.258720 ignition[801]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:18:20.258727 ignition[801]: failed to fetch config: resource requires networking Jan 14 13:18:20.258974 ignition[801]: Ignition finished successfully Jan 14 13:18:20.280496 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:18:20.290852 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 14 13:18:20.308284 ignition[879]: Ignition 2.20.0 Jan 14 13:18:20.308296 ignition[879]: Stage: fetch Jan 14 13:18:20.308522 ignition[879]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:18:20.308535 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:18:20.308670 ignition[879]: parsed url from cmdline: "" Jan 14 13:18:20.308675 ignition[879]: no config URL provided Jan 14 13:18:20.308683 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:18:20.308690 ignition[879]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:18:20.308717 ignition[879]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 14 13:18:20.412723 ignition[879]: GET result: OK Jan 14 13:18:20.412828 ignition[879]: config has been read from IMDS userdata Jan 14 13:18:20.412864 ignition[879]: parsing config with SHA512: b533e0e0df585f9306d40441cd9c5f8559dcb69a72643f0bf23899077afbab7c9ea45f26008013b8a46b347bbecdbd7882c6d5cad423e2d297b176312f356d1e Jan 14 13:18:20.418175 unknown[879]: fetched base config from "system" Jan 14 13:18:20.418791 ignition[879]: fetch: fetch complete Jan 14 13:18:20.418197 unknown[879]: fetched base config from "system" Jan 14 13:18:20.418797 ignition[879]: fetch: fetch passed Jan 14 13:18:20.418207 unknown[879]: fetched user config from "azure" Jan 14 13:18:20.418857 ignition[879]: Ignition finished successfully Jan 14 13:18:20.423771 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 13:18:20.440848 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 14 13:18:20.457340 ignition[885]: Ignition 2.20.0 Jan 14 13:18:20.457353 ignition[885]: Stage: kargs Jan 14 13:18:20.457571 ignition[885]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:18:20.457585 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:18:20.466543 ignition[885]: kargs: kargs passed Jan 14 13:18:20.466625 ignition[885]: Ignition finished successfully Jan 14 13:18:20.471010 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 13:18:20.481802 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 13:18:20.495499 ignition[891]: Ignition 2.20.0 Jan 14 13:18:20.495511 ignition[891]: Stage: disks Jan 14 13:18:20.495754 ignition[891]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:18:20.495768 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:18:20.496656 ignition[891]: disks: disks passed Jan 14 13:18:20.496705 ignition[891]: Ignition finished successfully Jan 14 13:18:20.506260 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 13:18:20.514920 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 13:18:20.518106 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 13:18:20.527422 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 13:18:20.532802 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:18:20.539032 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:18:20.550809 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 13:18:20.604062 systemd-fsck[899]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 14 13:18:20.609115 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 13:18:20.623708 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 14 13:18:20.712952 kernel: EXT4-fs (sda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 14 13:18:20.713600 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 13:18:20.716640 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 13:18:20.752733 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:18:20.761794 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 13:18:20.773627 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (910) Jan 14 13:18:20.774681 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 14 13:18:20.794172 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:18:20.794208 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:18:20.794228 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:18:20.778261 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 13:18:20.778308 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:18:20.804946 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 13:18:20.813691 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:18:20.815848 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 13:18:20.823999 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 14 13:18:21.371600 coreos-metadata[912]: Jan 14 13:18:21.371 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 13:18:21.378871 coreos-metadata[912]: Jan 14 13:18:21.378 INFO Fetch successful Jan 14 13:18:21.381932 coreos-metadata[912]: Jan 14 13:18:21.381 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 14 13:18:21.399025 coreos-metadata[912]: Jan 14 13:18:21.398 INFO Fetch successful Jan 14 13:18:21.406245 coreos-metadata[912]: Jan 14 13:18:21.406 INFO wrote hostname ci-4152.2.0-a-ae9609fe4e to /sysroot/etc/hostname Jan 14 13:18:21.411779 initrd-setup-root[939]: cut: /sysroot/etc/passwd: No such file or directory Jan 14 13:18:21.416107 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:18:21.422322 systemd-networkd[870]: eth0: Gained IPv6LL Jan 14 13:18:21.433887 initrd-setup-root[947]: cut: /sysroot/etc/group: No such file or directory Jan 14 13:18:21.442752 initrd-setup-root[954]: cut: /sysroot/etc/shadow: No such file or directory Jan 14 13:18:21.448683 initrd-setup-root[961]: cut: /sysroot/etc/gshadow: No such file or directory Jan 14 13:18:21.548754 systemd-networkd[870]: enP56148s1: Gained IPv6LL Jan 14 13:18:22.287101 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 13:18:22.305713 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 13:18:22.312779 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 13:18:22.323619 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 13:18:22.329738 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:18:22.354085 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 14 13:18:22.363438 ignition[1033]: INFO : Ignition 2.20.0 Jan 14 13:18:22.363438 ignition[1033]: INFO : Stage: mount Jan 14 13:18:22.367758 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:18:22.367758 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:18:22.374944 ignition[1033]: INFO : mount: mount passed Jan 14 13:18:22.376986 ignition[1033]: INFO : Ignition finished successfully Jan 14 13:18:22.377501 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 13:18:22.393712 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 13:18:22.400893 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:18:22.418736 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1045) Jan 14 13:18:22.418785 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:18:22.422623 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:18:22.427111 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:18:22.434525 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:18:22.435099 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 14 13:18:22.463504 ignition[1062]: INFO : Ignition 2.20.0 Jan 14 13:18:22.463504 ignition[1062]: INFO : Stage: files Jan 14 13:18:22.467786 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:18:22.467786 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:18:22.467786 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping Jan 14 13:18:22.478656 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 13:18:22.478656 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 13:18:22.534385 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 13:18:22.539720 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 13:18:22.539720 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 13:18:22.534918 unknown[1062]: wrote ssh authorized keys file for user: core Jan 14 13:18:22.551444 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 14 13:18:22.557685 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 14 13:18:22.586073 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 13:18:22.796270 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 14 13:18:22.796270 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 14 13:18:22.807673 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 14 13:18:23.283098 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 14 13:18:23.374001 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 14 13:18:23.379141 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 14 13:18:23.379141 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 13:18:23.379141 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 13:18:23.379141 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 13:18:23.379141 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 13:18:23.379141 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 13:18:23.379141 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 13:18:23.379141 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 13:18:23.421471 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:18:23.421471 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:18:23.421471 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:18:23.421471 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:18:23.421471 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:18:23.421471 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 14 13:18:23.871696 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 14 13:18:24.283033 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:18:24.283033 ignition[1062]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 14 13:18:24.301134 ignition[1062]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 13:18:24.306910 ignition[1062]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 13:18:24.306910 ignition[1062]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 14 13:18:24.306910 ignition[1062]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 14 13:18:24.306910 ignition[1062]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 13:18:24.306910 ignition[1062]: INFO : files: createResultFile: createFiles: op(f): [started] writing file 
"/sysroot/etc/.ignition-result.json" Jan 14 13:18:24.306910 ignition[1062]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:18:24.306910 ignition[1062]: INFO : files: files passed Jan 14 13:18:24.306910 ignition[1062]: INFO : Ignition finished successfully Jan 14 13:18:24.313166 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 13:18:24.336765 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 13:18:24.351794 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 13:18:24.359371 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 13:18:24.359508 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 13:18:24.375163 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:18:24.375163 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:18:24.368953 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:18:24.392459 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:18:24.375707 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 13:18:24.400944 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 13:18:24.432709 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 13:18:24.432826 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 13:18:24.441324 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 13:18:24.448571 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 14 13:18:24.453898 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 13:18:24.462869 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 13:18:24.477943 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:18:24.487818 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 13:18:24.502533 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:18:24.509080 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:18:24.515499 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 13:18:24.517878 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 13:18:24.518028 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:18:24.521831 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 13:18:24.526158 systemd[1]: Stopped target basic.target - Basic System. Jan 14 13:18:24.530983 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 13:18:24.536814 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:18:24.550198 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 13:18:24.551354 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 13:18:24.551745 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:18:24.552197 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 13:18:24.552703 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 13:18:24.553089 systemd[1]: Stopped target swap.target - Swaps. Jan 14 13:18:24.553488 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 14 13:18:24.553670 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:18:24.554390 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:18:24.554836 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:18:24.555207 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 13:18:24.574359 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:18:24.580588 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 13:18:24.580781 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 13:18:24.586525 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 13:18:24.586683 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:18:24.592502 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 13:18:24.650962 ignition[1115]: INFO : Ignition 2.20.0 Jan 14 13:18:24.650962 ignition[1115]: INFO : Stage: umount Jan 14 13:18:24.650962 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:18:24.650962 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:18:24.650962 ignition[1115]: INFO : umount: umount passed Jan 14 13:18:24.650962 ignition[1115]: INFO : Ignition finished successfully Jan 14 13:18:24.592671 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 13:18:24.597522 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 13:18:24.597689 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:18:24.622532 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 13:18:24.632710 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jan 14 13:18:24.632964 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:18:24.647742 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 13:18:24.652751 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 13:18:24.654748 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:18:24.658599 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 13:18:24.658763 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:18:24.668859 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 13:18:24.668965 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 13:18:24.674667 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 13:18:24.674759 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 13:18:24.701901 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 13:18:24.701963 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 13:18:24.706399 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 13:18:24.706465 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 13:18:24.711797 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 13:18:24.711855 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 13:18:24.716818 systemd[1]: Stopped target network.target - Network. Jan 14 13:18:24.727769 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 13:18:24.727861 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:18:24.733101 systemd[1]: Stopped target paths.target - Path Units. Jan 14 13:18:24.740591 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 14 13:18:24.740664 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:18:24.748185 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 13:18:24.753167 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 13:18:24.758021 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 13:18:24.758081 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:18:24.763051 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 13:18:24.763098 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 13:18:24.767919 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 13:18:24.767986 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 13:18:24.778439 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 13:18:24.778519 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 13:18:24.784915 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 13:18:24.794193 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 13:18:24.794699 systemd-networkd[870]: eth0: DHCPv6 lease lost Jan 14 13:18:24.809113 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 13:18:24.809774 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 13:18:24.809886 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 13:18:24.816824 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 13:18:24.816938 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 13:18:24.842035 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 13:18:24.847143 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 13:18:24.865062 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Jan 14 13:18:24.865127 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:18:24.871073 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 13:18:24.871150 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 13:18:24.887714 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 13:18:24.892829 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 13:18:24.892915 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:18:24.901648 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 13:18:24.901721 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:18:24.910000 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 13:18:24.910068 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 13:18:24.918264 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 13:18:24.918333 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:18:24.924468 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:18:24.943696 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 13:18:24.943899 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:18:24.954528 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 13:18:24.957495 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 13:18:24.960416 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 13:18:24.960460 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:18:24.966000 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 14 13:18:24.966064 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:18:24.978686 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 13:18:24.978765 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 13:18:24.984028 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:18:24.984080 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:18:24.996835 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 13:18:25.005672 kernel: hv_netvsc 6045bd0e-bce9-6045-bd0e-bce96045bd0e eth0: Data path switched from VF: enP56148s1 Jan 14 13:18:25.005535 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 13:18:25.005630 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:18:25.008786 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:18:25.008854 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:18:25.017973 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 13:18:25.018075 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 13:18:25.044949 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 13:18:25.045068 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 13:18:25.055097 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 13:18:25.067804 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 13:18:25.077382 systemd[1]: Switching root. 
Jan 14 13:18:25.179170 systemd-journald[177]: Journal stopped Jan 14 13:18:15.126637 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025 Jan 14 13:18:15.126670 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 14 13:18:15.126679 kernel: BIOS-provided physical RAM map: Jan 14 13:18:15.126690 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 14 13:18:15.126695 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 14 13:18:15.126702 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jan 14 13:18:15.126712 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Jan 14 13:18:15.126722 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 14 13:18:15.126730 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 14 13:18:15.126737 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 14 13:18:15.126743 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 14 13:18:15.126752 kernel: printk: bootconsole [earlyser0] enabled Jan 14 13:18:15.126758 kernel: NX (Execute Disable) protection: active Jan 14 13:18:15.126764 kernel: APIC: Static calls initialized Jan 14 13:18:15.126778 kernel: efi: EFI v2.7 by Microsoft Jan 14 13:18:15.126785 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018 Jan 14 13:18:15.126796 
kernel: random: crng init done Jan 14 13:18:15.126803 kernel: secureboot: Secure boot disabled Jan 14 13:18:15.126810 kernel: SMBIOS 3.1.0 present. Jan 14 13:18:15.126821 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jan 14 13:18:15.126828 kernel: Hypervisor detected: Microsoft Hyper-V Jan 14 13:18:15.126835 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jan 14 13:18:15.126841 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Jan 14 13:18:15.126848 kernel: Hyper-V: Nested features: 0x1e0101 Jan 14 13:18:15.126859 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 14 13:18:15.126868 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 14 13:18:15.126875 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 13:18:15.126884 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 13:18:15.126893 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jan 14 13:18:15.126900 kernel: tsc: Detected 2593.905 MHz processor Jan 14 13:18:15.126910 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 14 13:18:15.126917 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 14 13:18:15.126925 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jan 14 13:18:15.126937 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 14 13:18:15.126944 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 14 13:18:15.126955 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jan 14 13:18:15.126962 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jan 14 13:18:15.126969 kernel: Using GB pages for direct mapping Jan 14 13:18:15.126979 kernel: ACPI: Early table checksum verification disabled Jan 14 13:18:15.126986 kernel: ACPI: 
RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 14 13:18:15.127001 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:18:15.127011 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:18:15.127020 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jan 14 13:18:15.127029 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 14 13:18:15.127036 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:18:15.127047 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:18:15.127055 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:18:15.127064 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:18:15.127072 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:18:15.127083 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:18:15.127090 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:18:15.127098 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 14 13:18:15.127109 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jan 14 13:18:15.127117 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 14 13:18:15.127129 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 14 13:18:15.127137 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 14 13:18:15.127149 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 14 13:18:15.127157 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 14 13:18:15.127168 kernel: ACPI: Reserving SRAT table memory at [mem 
0x3ffd4000-0x3ffd42cf] Jan 14 13:18:15.127176 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 14 13:18:15.127185 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 14 13:18:15.127194 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 14 13:18:15.127202 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 14 13:18:15.127213 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 14 13:18:15.127224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 14 13:18:15.127233 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 14 13:18:15.127242 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 14 13:18:15.127260 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 14 13:18:15.127269 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 14 13:18:15.127276 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 14 13:18:15.127284 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 14 13:18:15.127294 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 14 13:18:15.127302 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 14 13:18:15.127315 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 14 13:18:15.127322 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 14 13:18:15.127331 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 14 13:18:15.127340 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 14 13:18:15.127348 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 14 13:18:15.127359 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 14 13:18:15.127367 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + 
[mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 14 13:18:15.127375 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 14 13:18:15.127385 kernel: Zone ranges: Jan 14 13:18:15.127395 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 14 13:18:15.127405 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 14 13:18:15.127413 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 13:18:15.127422 kernel: Movable zone start for each node Jan 14 13:18:15.127431 kernel: Early memory node ranges Jan 14 13:18:15.127439 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 14 13:18:15.127450 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 14 13:18:15.127457 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 14 13:18:15.127466 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 13:18:15.127478 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 14 13:18:15.127486 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 14 13:18:15.127496 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 14 13:18:15.127505 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jan 14 13:18:15.127514 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 14 13:18:15.127524 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 14 13:18:15.127535 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 14 13:18:15.127548 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 14 13:18:15.127563 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 14 13:18:15.127586 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 14 13:18:15.127605 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 14 13:18:15.127623 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 14 13:18:15.127640 kernel: Booting paravirtualized kernel on Hyper-V Jan 14 13:18:15.127655 kernel: clocksource: 
refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 14 13:18:15.127673 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 14 13:18:15.127687 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 14 13:18:15.127701 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 14 13:18:15.127715 kernel: pcpu-alloc: [0] 0 1 Jan 14 13:18:15.127731 kernel: Hyper-V: PV spinlocks enabled Jan 14 13:18:15.127747 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 14 13:18:15.127767 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 14 13:18:15.127784 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 14 13:18:15.127800 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 14 13:18:15.127816 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 13:18:15.127832 kernel: Fallback order for Node 0: 0 Jan 14 13:18:15.127849 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 14 13:18:15.127870 kernel: Policy zone: Normal Jan 14 13:18:15.127898 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 13:18:15.127917 kernel: software IO TLB: area num 2. 
Jan 14 13:18:15.127937 kernel: Memory: 8077088K/8387460K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 310116K reserved, 0K cma-reserved) Jan 14 13:18:15.127954 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 13:18:15.127969 kernel: ftrace: allocating 37920 entries in 149 pages Jan 14 13:18:15.127987 kernel: ftrace: allocated 149 pages with 4 groups Jan 14 13:18:15.128002 kernel: Dynamic Preempt: voluntary Jan 14 13:18:15.128018 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 13:18:15.128035 kernel: rcu: RCU event tracing is enabled. Jan 14 13:18:15.128055 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 13:18:15.128076 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 13:18:15.128093 kernel: Rude variant of Tasks RCU enabled. Jan 14 13:18:15.128111 kernel: Tracing variant of Tasks RCU enabled. Jan 14 13:18:15.128129 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 13:18:15.128148 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 13:18:15.128169 kernel: Using NULL legacy PIC Jan 14 13:18:15.128185 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 14 13:18:15.128204 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 14 13:18:15.128220 kernel: Console: colour dummy device 80x25 Jan 14 13:18:15.128237 kernel: printk: console [tty1] enabled Jan 14 13:18:15.128262 kernel: printk: console [ttyS0] enabled Jan 14 13:18:15.128278 kernel: printk: bootconsole [earlyser0] disabled Jan 14 13:18:15.128290 kernel: ACPI: Core revision 20230628 Jan 14 13:18:15.128302 kernel: Failed to register legacy timer interrupt Jan 14 13:18:15.128314 kernel: APIC: Switch to symmetric I/O mode setup Jan 14 13:18:15.128331 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 14 13:18:15.128344 kernel: Hyper-V: Using IPI hypercalls Jan 14 13:18:15.128358 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 14 13:18:15.128371 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 14 13:18:15.128384 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 14 13:18:15.128397 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 14 13:18:15.128410 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 14 13:18:15.128424 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 14 13:18:15.128438 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Jan 14 13:18:15.128456 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 14 13:18:15.128470 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 14 13:18:15.128483 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 14 13:18:15.128497 kernel: Spectre V2 : Mitigation: Retpolines Jan 14 13:18:15.128512 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 14 13:18:15.128527 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 14 13:18:15.128542 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 14 13:18:15.128556 kernel: RETBleed: Vulnerable Jan 14 13:18:15.128571 kernel: Speculative Store Bypass: Vulnerable Jan 14 13:18:15.128585 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 13:18:15.128602 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 13:18:15.128616 kernel: GDS: Unknown: Dependent on hypervisor status Jan 14 13:18:15.128629 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 14 13:18:15.128642 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 14 13:18:15.128655 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 14 13:18:15.128668 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 14 13:18:15.128682 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 14 13:18:15.128696 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 14 13:18:15.128710 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 14 13:18:15.128723 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 14 13:18:15.128736 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 14 13:18:15.128753 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 14 13:18:15.128766 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 14 13:18:15.128781 kernel: Freeing SMP alternatives memory: 32K Jan 14 13:18:15.128794 kernel: pid_max: default: 32768 minimum: 301 Jan 14 13:18:15.128808 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 14 13:18:15.128822 kernel: landlock: Up and running. Jan 14 13:18:15.128834 kernel: SELinux: Initializing. 
Jan 14 13:18:15.128847 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 13:18:15.128861 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 13:18:15.128875 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 14 13:18:15.128889 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:18:15.128907 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:18:15.128921 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:18:15.128935 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 14 13:18:15.128949 kernel: signal: max sigframe size: 3632 Jan 14 13:18:15.128964 kernel: rcu: Hierarchical SRCU implementation. Jan 14 13:18:15.128978 kernel: rcu: Max phase no-delay instances is 400. Jan 14 13:18:15.128993 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 14 13:18:15.129006 kernel: smp: Bringing up secondary CPUs ... Jan 14 13:18:15.129020 kernel: smpboot: x86: Booting SMP configuration: Jan 14 13:18:15.129036 kernel: .... node #0, CPUs: #1 Jan 14 13:18:15.129051 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 14 13:18:15.129066 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 14 13:18:15.129080 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 13:18:15.129094 kernel: smpboot: Max logical packages: 1 Jan 14 13:18:15.129108 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 14 13:18:15.129121 kernel: devtmpfs: initialized Jan 14 13:18:15.129135 kernel: x86/mm: Memory block size: 128MB Jan 14 13:18:15.129152 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 14 13:18:15.129167 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 13:18:15.129181 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 13:18:15.129195 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 13:18:15.129209 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 13:18:15.129222 kernel: audit: initializing netlink subsys (disabled) Jan 14 13:18:15.129237 kernel: audit: type=2000 audit(1736860693.028:1): state=initialized audit_enabled=0 res=1 Jan 14 13:18:15.131282 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 13:18:15.131310 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 14 13:18:15.131332 kernel: cpuidle: using governor menu Jan 14 13:18:15.131348 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 13:18:15.131363 kernel: dca service started, version 1.12.1 Jan 14 13:18:15.131378 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 14 13:18:15.131394 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 14 13:18:15.131409 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 13:18:15.131425 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 13:18:15.131440 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 13:18:15.131454 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 13:18:15.131473 kernel: ACPI: Added _OSI(Module Device) Jan 14 13:18:15.131488 kernel: ACPI: Added _OSI(Processor Device) Jan 14 13:18:15.131503 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 14 13:18:15.131518 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 13:18:15.131534 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 13:18:15.131549 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 14 13:18:15.131564 kernel: ACPI: Interpreter enabled Jan 14 13:18:15.131579 kernel: ACPI: PM: (supports S0 S5) Jan 14 13:18:15.131594 kernel: ACPI: Using IOAPIC for interrupt routing Jan 14 13:18:15.131613 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 14 13:18:15.131629 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 14 13:18:15.131644 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 14 13:18:15.131659 kernel: iommu: Default domain type: Translated Jan 14 13:18:15.131674 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 14 13:18:15.131689 kernel: efivars: Registered efivars operations Jan 14 13:18:15.131704 kernel: PCI: Using ACPI for IRQ routing Jan 14 13:18:15.131719 kernel: PCI: System does not support PCI Jan 14 13:18:15.131733 kernel: vgaarb: loaded Jan 14 13:18:15.131752 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 14 13:18:15.131767 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 13:18:15.131782 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 13:18:15.131797 kernel: pnp: PnP ACPI init Jan 14 13:18:15.131811 
kernel: pnp: PnP ACPI: found 3 devices Jan 14 13:18:15.131826 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 14 13:18:15.131842 kernel: NET: Registered PF_INET protocol family Jan 14 13:18:15.131857 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 14 13:18:15.131873 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 14 13:18:15.131891 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 13:18:15.131906 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 14 13:18:15.131922 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 14 13:18:15.131937 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 14 13:18:15.131953 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 13:18:15.131968 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 13:18:15.131983 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 13:18:15.131998 kernel: NET: Registered PF_XDP protocol family Jan 14 13:18:15.132013 kernel: PCI: CLS 0 bytes, default 64 Jan 14 13:18:15.132030 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 14 13:18:15.132046 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Jan 14 13:18:15.132061 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 14 13:18:15.132076 kernel: Initialise system trusted keyrings Jan 14 13:18:15.132091 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 14 13:18:15.132106 kernel: Key type asymmetric registered Jan 14 13:18:15.132121 kernel: Asymmetric key parser 'x509' registered Jan 14 13:18:15.132135 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 14 13:18:15.132150 kernel: io scheduler mq-deadline 
registered Jan 14 13:18:15.132168 kernel: io scheduler kyber registered Jan 14 13:18:15.132183 kernel: io scheduler bfq registered Jan 14 13:18:15.132198 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 14 13:18:15.132213 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 13:18:15.132228 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 13:18:15.132243 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 14 13:18:15.132267 kernel: i8042: PNP: No PS/2 controller found. Jan 14 13:18:15.132460 kernel: rtc_cmos 00:02: registered as rtc0 Jan 14 13:18:15.132590 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T13:18:14 UTC (1736860694) Jan 14 13:18:15.132705 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 14 13:18:15.132724 kernel: intel_pstate: CPU model not supported Jan 14 13:18:15.132740 kernel: efifb: probing for efifb Jan 14 13:18:15.132755 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 14 13:18:15.132771 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 14 13:18:15.132786 kernel: efifb: scrolling: redraw Jan 14 13:18:15.132801 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 14 13:18:15.132816 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 13:18:15.132835 kernel: fb0: EFI VGA frame buffer device Jan 14 13:18:15.132850 kernel: pstore: Using crash dump compression: deflate Jan 14 13:18:15.132865 kernel: pstore: Registered efi_pstore as persistent store backend Jan 14 13:18:15.132880 kernel: NET: Registered PF_INET6 protocol family Jan 14 13:18:15.132895 kernel: Segment Routing with IPv6 Jan 14 13:18:15.132909 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 13:18:15.132925 kernel: NET: Registered PF_PACKET protocol family Jan 14 13:18:15.132939 kernel: Key type dns_resolver registered Jan 14 13:18:15.132954 kernel: IPI shorthand broadcast: enabled Jan 14 13:18:15.132972 kernel: 
sched_clock: Marking stable (961004100, 50334100)->(1283094400, -271756200) Jan 14 13:18:15.132988 kernel: registered taskstats version 1 Jan 14 13:18:15.133002 kernel: Loading compiled-in X.509 certificates Jan 14 13:18:15.133018 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344' Jan 14 13:18:15.133033 kernel: Key type .fscrypt registered Jan 14 13:18:15.133048 kernel: Key type fscrypt-provisioning registered Jan 14 13:18:15.133063 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 13:18:15.133078 kernel: ima: Allocated hash algorithm: sha1 Jan 14 13:18:15.133096 kernel: ima: No architecture policies found Jan 14 13:18:15.133111 kernel: clk: Disabling unused clocks Jan 14 13:18:15.133126 kernel: Freeing unused kernel image (initmem) memory: 42976K Jan 14 13:18:15.133141 kernel: Write protecting the kernel read-only data: 36864k Jan 14 13:18:15.133155 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 14 13:18:15.133171 kernel: Run /init as init process Jan 14 13:18:15.133185 kernel: with arguments: Jan 14 13:18:15.133200 kernel: /init Jan 14 13:18:15.133214 kernel: with environment: Jan 14 13:18:15.133228 kernel: HOME=/ Jan 14 13:18:15.133245 kernel: TERM=linux Jan 14 13:18:15.133281 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 14 13:18:15.133295 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 14 13:18:15.133310 systemd[1]: Detected virtualization microsoft. Jan 14 13:18:15.133323 systemd[1]: Detected architecture x86-64. Jan 14 13:18:15.133336 systemd[1]: Running in initrd. Jan 14 13:18:15.133348 systemd[1]: No hostname configured, using default hostname. 
Jan 14 13:18:15.133363 systemd[1]: Hostname set to . Jan 14 13:18:15.133373 systemd[1]: Initializing machine ID from random generator. Jan 14 13:18:15.133381 systemd[1]: Queued start job for default target initrd.target. Jan 14 13:18:15.133390 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:18:15.133398 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:18:15.133408 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 13:18:15.133416 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 13:18:15.133425 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 13:18:15.133436 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 13:18:15.133445 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 14 13:18:15.133455 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 14 13:18:15.133464 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:18:15.133472 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:18:15.133481 systemd[1]: Reached target paths.target - Path Units. Jan 14 13:18:15.133490 systemd[1]: Reached target slices.target - Slice Units. Jan 14 13:18:15.133501 systemd[1]: Reached target swap.target - Swaps. Jan 14 13:18:15.133510 systemd[1]: Reached target timers.target - Timer Units. Jan 14 13:18:15.133519 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:18:15.133528 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 14 13:18:15.133537 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 13:18:15.133545 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 14 13:18:15.133554 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:18:15.133562 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 13:18:15.133571 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:18:15.133582 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 13:18:15.133590 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 13:18:15.133598 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 13:18:15.133607 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 13:18:15.133615 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 13:18:15.133624 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 13:18:15.133632 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 13:18:15.133663 systemd-journald[177]: Collecting audit messages is disabled. Jan 14 13:18:15.133688 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:18:15.133698 systemd-journald[177]: Journal started Jan 14 13:18:15.133721 systemd-journald[177]: Runtime Journal (/run/log/journal/c518949d4a1a46e0a8b53d3b8756831a) is 8.0M, max 158.8M, 150.8M free. Jan 14 13:18:15.150495 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 13:18:15.149605 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 13:18:15.154907 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:18:15.158123 systemd-modules-load[178]: Inserted module 'overlay' Jan 14 13:18:15.163802 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 14 13:18:15.188393 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 13:18:15.197531 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 13:18:15.204193 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:18:15.211367 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:18:15.215874 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:18:15.232643 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 13:18:15.234819 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:18:15.260279 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 13:18:15.264752 systemd-modules-load[178]: Inserted module 'br_netfilter' Jan 14 13:18:15.267013 kernel: Bridge firewalling registered Jan 14 13:18:15.267536 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 13:18:15.272714 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:18:15.286473 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 13:18:15.294446 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 14 13:18:15.306675 dracut-cmdline[207]: dracut-dracut-053 Jan 14 13:18:15.306675 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 14 13:18:15.297777 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:18:15.343497 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:18:15.355597 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 13:18:15.392276 kernel: SCSI subsystem initialized Jan 14 13:18:15.404085 systemd-resolved[271]: Positive Trust Anchors: Jan 14 13:18:15.410616 kernel: Loading iSCSI transport class v2.0-870. Jan 14 13:18:15.404103 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 13:18:15.404145 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 13:18:15.409875 systemd-resolved[271]: Defaulting to hostname 'linux'. 
Jan 14 13:18:15.443029 kernel: iscsi: registered transport (tcp) Jan 14 13:18:15.415202 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 13:18:15.418296 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:18:15.465554 kernel: iscsi: registered transport (qla4xxx) Jan 14 13:18:15.465656 kernel: QLogic iSCSI HBA Driver Jan 14 13:18:15.502296 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 13:18:15.511428 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 13:18:15.541286 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 13:18:15.541387 kernel: device-mapper: uevent: version 1.0.3 Jan 14 13:18:15.544987 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 14 13:18:15.586288 kernel: raid6: avx512x4 gen() 18190 MB/s Jan 14 13:18:15.606284 kernel: raid6: avx512x2 gen() 18310 MB/s Jan 14 13:18:15.625267 kernel: raid6: avx512x1 gen() 18139 MB/s Jan 14 13:18:15.644266 kernel: raid6: avx2x4 gen() 18099 MB/s Jan 14 13:18:15.663270 kernel: raid6: avx2x2 gen() 18229 MB/s Jan 14 13:18:15.683194 kernel: raid6: avx2x1 gen() 13536 MB/s Jan 14 13:18:15.683237 kernel: raid6: using algorithm avx512x2 gen() 18310 MB/s Jan 14 13:18:15.704344 kernel: raid6: .... xor() 28496 MB/s, rmw enabled Jan 14 13:18:15.704414 kernel: raid6: using avx512x2 recovery algorithm Jan 14 13:18:15.728287 kernel: xor: automatically using best checksumming function avx Jan 14 13:18:15.874281 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 13:18:15.884642 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:18:15.893558 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:18:15.918028 systemd-udevd[396]: Using default interface naming scheme 'v255'. 
Jan 14 13:18:15.922665 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:18:15.942485 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 13:18:15.956094 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Jan 14 13:18:15.985789 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:18:16.002576 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:18:16.045743 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:18:16.064528 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 13:18:16.093596 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 13:18:16.101880 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:18:16.109031 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:18:16.116278 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 13:18:16.126278 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 13:18:16.132414 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 13:18:16.157162 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:18:16.157348 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:18:16.170570 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:18:16.190582 kernel: AVX2 version of gcm_enc/dec engaged. Jan 14 13:18:16.190619 kernel: AES CTR mode by8 optimization enabled Jan 14 13:18:16.190646 kernel: hv_vmbus: Vmbus version:5.2 Jan 14 13:18:16.174133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:18:16.174366 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 14 13:18:16.187725 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:18:16.209295 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:18:16.217071 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:18:16.229926 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 14 13:18:16.229958 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 14 13:18:16.242515 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 13:18:16.247828 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:18:16.255422 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 14 13:18:16.250345 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:18:16.263272 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:18:16.279289 kernel: PTP clock support registered Jan 14 13:18:16.279320 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 14 13:18:16.289979 kernel: hv_vmbus: registering driver hid_hyperv Jan 14 13:18:16.290034 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 14 13:18:16.297271 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 14 13:18:16.312267 kernel: hv_vmbus: registering driver hv_netvsc Jan 14 13:18:16.320484 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 14 13:18:16.335057 kernel: hv_utils: Registering HyperV Utility Driver Jan 14 13:18:16.335135 kernel: hv_vmbus: registering driver hv_utils Jan 14 13:18:16.343793 kernel: hv_utils: Shutdown IC version 3.2 Jan 14 13:18:16.343901 kernel: hv_utils: Heartbeat IC version 3.0 Jan 14 13:18:16.344104 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:18:16.349888 kernel: hv_utils: TimeSync IC version 4.0 Jan 14 13:18:17.054708 systemd-resolved[271]: Clock change detected. Flushing caches. Jan 14 13:18:17.061664 kernel: hv_vmbus: registering driver hv_storvsc Jan 14 13:18:17.065842 kernel: scsi host1: storvsc_host_t Jan 14 13:18:17.065947 kernel: scsi host0: storvsc_host_t Jan 14 13:18:17.072555 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:18:17.081970 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 14 13:18:17.082034 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 14 13:18:17.104101 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 14 13:18:17.106635 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 13:18:17.106661 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 14 13:18:17.117919 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 14 13:18:17.135751 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 14 13:18:17.135963 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 14 13:18:17.136132 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 14 13:18:17.136307 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 14 13:18:17.136482 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:18:17.136503 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 14 13:18:17.254666 kernel: hv_netvsc 6045bd0e-bce9-6045-bd0e-bce96045bd0e eth0: VF slot 1 added Jan 14 13:18:17.265144 kernel: hv_vmbus: registering driver hv_pci Jan 14 
13:18:17.265207 kernel: hv_pci 670cc21e-db54-4862-8c56-d82ca5bff970: PCI VMBus probing: Using version 0x10004 Jan 14 13:18:17.310444 kernel: hv_pci 670cc21e-db54-4862-8c56-d82ca5bff970: PCI host bridge to bus db54:00 Jan 14 13:18:17.310657 kernel: pci_bus db54:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 14 13:18:17.311666 kernel: pci_bus db54:00: No busn resource found for root bus, will use [bus 00-ff] Jan 14 13:18:17.311830 kernel: pci db54:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 14 13:18:17.312034 kernel: pci db54:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:18:17.312212 kernel: pci db54:00:02.0: enabling Extended Tags Jan 14 13:18:17.312386 kernel: pci db54:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at db54:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 14 13:18:17.312557 kernel: pci_bus db54:00: busn_res: [bus 00-ff] end is updated to 00 Jan 14 13:18:17.312726 kernel: pci db54:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:18:17.483365 kernel: mlx5_core db54:00:02.0: enabling device (0000 -> 0002) Jan 14 13:18:17.747012 kernel: mlx5_core db54:00:02.0: firmware version: 14.30.5000 Jan 14 13:18:17.747249 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (439) Jan 14 13:18:17.747272 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (446) Jan 14 13:18:17.747293 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:18:17.747313 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:18:17.747333 kernel: hv_netvsc 6045bd0e-bce9-6045-bd0e-bce96045bd0e eth0: VF registering: eth1 Jan 14 13:18:17.747515 kernel: mlx5_core db54:00:02.0 eth1: joined to eth0 Jan 14 13:18:17.748793 kernel: mlx5_core db54:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 14 13:18:17.547157 systemd[1]: Found device 
dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 14 13:18:17.635420 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 14 13:18:17.663567 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 14 13:18:17.672008 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 14 13:18:17.765739 kernel: mlx5_core db54:00:02.0 enP56148s1: renamed from eth1 Jan 14 13:18:17.675774 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 14 13:18:17.691968 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 13:18:18.734640 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:18:18.735344 disk-uuid[599]: The operation has completed successfully. Jan 14 13:18:18.825739 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 13:18:18.825857 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 13:18:18.839782 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 14 13:18:18.846171 sh[686]: Success Jan 14 13:18:18.873721 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 14 13:18:19.084172 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 14 13:18:19.090021 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 14 13:18:19.100739 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 14 13:18:19.117630 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 14 13:18:19.117694 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:18:19.123443 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 14 13:18:19.126388 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 13:18:19.132383 kernel: BTRFS info (device dm-0): using free space tree Jan 14 13:18:19.396314 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 14 13:18:19.402472 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 13:18:19.411815 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 13:18:19.417920 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 13:18:19.434035 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:18:19.434102 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:18:19.434118 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:18:19.458347 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:18:19.472936 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:18:19.472502 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 14 13:18:19.482282 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 13:18:19.491905 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 13:18:19.537069 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:18:19.547846 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 14 13:18:19.567790 systemd-networkd[870]: lo: Link UP Jan 14 13:18:19.567800 systemd-networkd[870]: lo: Gained carrier Jan 14 13:18:19.569941 systemd-networkd[870]: Enumeration completed Jan 14 13:18:19.570471 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:18:19.572410 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:18:19.572415 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:18:19.576923 systemd[1]: Reached target network.target - Network. Jan 14 13:18:19.640636 kernel: mlx5_core db54:00:02.0 enP56148s1: Link up Jan 14 13:18:19.675637 kernel: hv_netvsc 6045bd0e-bce9-6045-bd0e-bce96045bd0e eth0: Data path switched to VF: enP56148s1 Jan 14 13:18:19.676547 systemd-networkd[870]: enP56148s1: Link UP Jan 14 13:18:19.676710 systemd-networkd[870]: eth0: Link UP Jan 14 13:18:19.676933 systemd-networkd[870]: eth0: Gained carrier Jan 14 13:18:19.676949 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 14 13:18:19.682874 systemd-networkd[870]: enP56148s1: Gained carrier Jan 14 13:18:19.720684 systemd-networkd[870]: eth0: DHCPv4 address 10.200.4.47/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 13:18:20.258469 ignition[801]: Ignition 2.20.0 Jan 14 13:18:20.258485 ignition[801]: Stage: fetch-offline Jan 14 13:18:20.258537 ignition[801]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:18:20.258548 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:18:20.258695 ignition[801]: parsed url from cmdline: "" Jan 14 13:18:20.258700 ignition[801]: no config URL provided Jan 14 13:18:20.258708 ignition[801]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:18:20.258720 ignition[801]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:18:20.258727 ignition[801]: failed to fetch config: resource requires networking Jan 14 13:18:20.258974 ignition[801]: Ignition finished successfully Jan 14 13:18:20.280496 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:18:20.290852 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 14 13:18:20.308284 ignition[879]: Ignition 2.20.0 Jan 14 13:18:20.308296 ignition[879]: Stage: fetch Jan 14 13:18:20.308522 ignition[879]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:18:20.308535 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:18:20.308670 ignition[879]: parsed url from cmdline: "" Jan 14 13:18:20.308675 ignition[879]: no config URL provided Jan 14 13:18:20.308683 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:18:20.308690 ignition[879]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:18:20.308717 ignition[879]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 14 13:18:20.412723 ignition[879]: GET result: OK Jan 14 13:18:20.412828 ignition[879]: config has been read from IMDS userdata Jan 14 13:18:20.412864 ignition[879]: parsing config with SHA512: b533e0e0df585f9306d40441cd9c5f8559dcb69a72643f0bf23899077afbab7c9ea45f26008013b8a46b347bbecdbd7882c6d5cad423e2d297b176312f356d1e Jan 14 13:18:20.418175 unknown[879]: fetched base config from "system" Jan 14 13:18:20.418791 ignition[879]: fetch: fetch complete Jan 14 13:18:20.418197 unknown[879]: fetched base config from "system" Jan 14 13:18:20.418797 ignition[879]: fetch: fetch passed Jan 14 13:18:20.418207 unknown[879]: fetched user config from "azure" Jan 14 13:18:20.418857 ignition[879]: Ignition finished successfully Jan 14 13:18:20.423771 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 13:18:20.440848 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 14 13:18:20.457340 ignition[885]: Ignition 2.20.0 Jan 14 13:18:20.457353 ignition[885]: Stage: kargs Jan 14 13:18:20.457571 ignition[885]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:18:20.457585 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:18:20.466543 ignition[885]: kargs: kargs passed Jan 14 13:18:20.466625 ignition[885]: Ignition finished successfully Jan 14 13:18:20.471010 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 13:18:20.481802 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 13:18:20.495499 ignition[891]: Ignition 2.20.0 Jan 14 13:18:20.495511 ignition[891]: Stage: disks Jan 14 13:18:20.495754 ignition[891]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:18:20.495768 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:18:20.496656 ignition[891]: disks: disks passed Jan 14 13:18:20.496705 ignition[891]: Ignition finished successfully Jan 14 13:18:20.506260 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 13:18:20.514920 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 13:18:20.518106 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 13:18:20.527422 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 13:18:20.532802 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:18:20.539032 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:18:20.550809 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 13:18:20.604062 systemd-fsck[899]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 14 13:18:20.609115 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 13:18:20.623708 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 14 13:18:20.712952 kernel: EXT4-fs (sda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none.
Jan 14 13:18:20.713600 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 13:18:20.716640 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 13:18:20.752733 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:18:20.761794 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 13:18:20.773627 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (910)
Jan 14 13:18:20.774681 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 14 13:18:20.794172 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 14 13:18:20.794208 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:18:20.794228 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:18:20.778261 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 13:18:20.778308 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:18:20.804946 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 13:18:20.813691 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:18:20.815848 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 13:18:20.823999 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:18:21.371600 coreos-metadata[912]: Jan 14 13:18:21.371 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 13:18:21.378871 coreos-metadata[912]: Jan 14 13:18:21.378 INFO Fetch successful
Jan 14 13:18:21.381932 coreos-metadata[912]: Jan 14 13:18:21.381 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 14 13:18:21.399025 coreos-metadata[912]: Jan 14 13:18:21.398 INFO Fetch successful
Jan 14 13:18:21.406245 coreos-metadata[912]: Jan 14 13:18:21.406 INFO wrote hostname ci-4152.2.0-a-ae9609fe4e to /sysroot/etc/hostname
Jan 14 13:18:21.411779 initrd-setup-root[939]: cut: /sysroot/etc/passwd: No such file or directory
Jan 14 13:18:21.416107 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:18:21.422322 systemd-networkd[870]: eth0: Gained IPv6LL
Jan 14 13:18:21.433887 initrd-setup-root[947]: cut: /sysroot/etc/group: No such file or directory
Jan 14 13:18:21.442752 initrd-setup-root[954]: cut: /sysroot/etc/shadow: No such file or directory
Jan 14 13:18:21.448683 initrd-setup-root[961]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 14 13:18:21.548754 systemd-networkd[870]: enP56148s1: Gained IPv6LL
Jan 14 13:18:22.287101 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 13:18:22.305713 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 13:18:22.312779 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 13:18:22.323619 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 13:18:22.329738 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 14 13:18:22.354085 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 13:18:22.363438 ignition[1033]: INFO : Ignition 2.20.0
Jan 14 13:18:22.363438 ignition[1033]: INFO : Stage: mount
Jan 14 13:18:22.367758 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:18:22.367758 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:18:22.374944 ignition[1033]: INFO : mount: mount passed
Jan 14 13:18:22.376986 ignition[1033]: INFO : Ignition finished successfully
Jan 14 13:18:22.377501 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 13:18:22.393712 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 13:18:22.400893 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:18:22.418736 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1045)
Jan 14 13:18:22.418785 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 14 13:18:22.422623 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:18:22.427111 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:18:22.434525 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:18:22.435099 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:18:22.463504 ignition[1062]: INFO : Ignition 2.20.0
Jan 14 13:18:22.463504 ignition[1062]: INFO : Stage: files
Jan 14 13:18:22.467786 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:18:22.467786 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:18:22.467786 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 13:18:22.478656 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 13:18:22.478656 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 13:18:22.534385 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 13:18:22.539720 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 13:18:22.539720 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 13:18:22.534918 unknown[1062]: wrote ssh authorized keys file for user: core
Jan 14 13:18:22.551444 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 14 13:18:22.557685 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 14 13:18:22.586073 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 14 13:18:22.796270 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 14 13:18:22.796270 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 14 13:18:22.807673 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 14 13:18:23.283098 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 14 13:18:23.374001 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 14 13:18:23.379141 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 13:18:23.379141 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 13:18:23.379141 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 13:18:23.379141 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 13:18:23.379141 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 13:18:23.379141 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 13:18:23.379141 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 13:18:23.379141 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 13:18:23.421471 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:18:23.421471 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:18:23.421471 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:18:23.421471 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:18:23.421471 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:18:23.421471 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 14 13:18:23.871696 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 14 13:18:24.283033 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 13:18:24.283033 ignition[1062]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 14 13:18:24.301134 ignition[1062]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 13:18:24.306910 ignition[1062]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 13:18:24.306910 ignition[1062]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 14 13:18:24.306910 ignition[1062]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 14 13:18:24.306910 ignition[1062]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 14 13:18:24.306910 ignition[1062]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:18:24.306910 ignition[1062]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:18:24.306910 ignition[1062]: INFO : files: files passed
Jan 14 13:18:24.306910 ignition[1062]: INFO : Ignition finished successfully
Jan 14 13:18:24.313166 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 13:18:24.336765 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 13:18:24.351794 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 13:18:24.359371 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 13:18:24.359508 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 13:18:24.375163 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:18:24.375163 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:18:24.368953 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:18:24.392459 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:18:24.375707 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 13:18:24.400944 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 13:18:24.432709 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 13:18:24.432826 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 13:18:24.441324 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 13:18:24.448571 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 13:18:24.453898 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 13:18:24.462869 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 13:18:24.477943 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:18:24.487818 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 13:18:24.502533 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:18:24.509080 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:18:24.515499 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 13:18:24.517878 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 13:18:24.518028 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:18:24.521831 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 13:18:24.526158 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 13:18:24.530983 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 13:18:24.536814 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:18:24.550198 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 13:18:24.551354 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 13:18:24.551745 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:18:24.552197 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 13:18:24.552703 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 13:18:24.553089 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 13:18:24.553488 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 13:18:24.553670 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:18:24.554390 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:18:24.554836 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:18:24.555207 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 13:18:24.574359 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:18:24.580588 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 13:18:24.580781 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:18:24.586525 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 13:18:24.586683 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:18:24.592502 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 13:18:24.650962 ignition[1115]: INFO : Ignition 2.20.0
Jan 14 13:18:24.650962 ignition[1115]: INFO : Stage: umount
Jan 14 13:18:24.650962 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:18:24.650962 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:18:24.650962 ignition[1115]: INFO : umount: umount passed
Jan 14 13:18:24.650962 ignition[1115]: INFO : Ignition finished successfully
Jan 14 13:18:24.592671 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 13:18:24.597522 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 14 13:18:24.597689 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:18:24.622532 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 13:18:24.632710 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 13:18:24.632964 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:18:24.647742 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 13:18:24.652751 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 13:18:24.654748 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:18:24.658599 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 13:18:24.658763 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:18:24.668859 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 13:18:24.668965 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 13:18:24.674667 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 13:18:24.674759 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 13:18:24.701901 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 13:18:24.701963 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 13:18:24.706399 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 14 13:18:24.706465 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 14 13:18:24.711797 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 14 13:18:24.711855 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 14 13:18:24.716818 systemd[1]: Stopped target network.target - Network.
Jan 14 13:18:24.727769 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 13:18:24.727861 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:18:24.733101 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 13:18:24.740591 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 13:18:24.740664 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:18:24.748185 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 13:18:24.753167 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 13:18:24.758021 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 13:18:24.758081 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:18:24.763051 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 13:18:24.763098 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:18:24.767919 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 14 13:18:24.767986 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 14 13:18:24.778439 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 14 13:18:24.778519 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 14 13:18:24.784915 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 14 13:18:24.794193 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 14 13:18:24.794699 systemd-networkd[870]: eth0: DHCPv6 lease lost
Jan 14 13:18:24.809113 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 14 13:18:24.809774 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 13:18:24.809886 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 13:18:24.816824 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 13:18:24.816938 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 13:18:24.842035 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 13:18:24.847143 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 13:18:24.865062 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 13:18:24.865127 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:18:24.871073 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 13:18:24.871150 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 13:18:24.887714 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 13:18:24.892829 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 13:18:24.892915 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:18:24.901648 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 13:18:24.901721 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:18:24.910000 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 13:18:24.910068 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:18:24.918264 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 13:18:24.918333 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:18:24.924468 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:18:24.943696 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 13:18:24.943899 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:18:24.954528 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 13:18:24.957495 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:18:24.960416 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 13:18:24.960460 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:18:24.966000 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 13:18:24.966064 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:18:24.978686 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 13:18:24.978765 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:18:24.984028 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 13:18:24.984080 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:18:24.996835 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 13:18:25.005672 kernel: hv_netvsc 6045bd0e-bce9-6045-bd0e-bce96045bd0e eth0: Data path switched from VF: enP56148s1
Jan 14 13:18:25.005535 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 13:18:25.005630 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:18:25.008786 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:18:25.008854 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:18:25.017973 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 13:18:25.018075 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 13:18:25.044949 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 13:18:25.045068 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 13:18:25.055097 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 13:18:25.067804 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 13:18:25.077382 systemd[1]: Switching root.
Jan 14 13:18:25.179170 systemd-journald[177]: Journal stopped
Jan 14 13:18:30.682810 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Jan 14 13:18:30.682859 kernel: SELinux: policy capability network_peer_controls=1
Jan 14 13:18:30.682881 kernel: SELinux: policy capability open_perms=1
Jan 14 13:18:30.682896 kernel: SELinux: policy capability extended_socket_class=1
Jan 14 13:18:30.682914 kernel: SELinux: policy capability always_check_network=0
Jan 14 13:18:30.682932 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 14 13:18:30.682949 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 14 13:18:30.682970 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 14 13:18:30.682990 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 14 13:18:30.683006 kernel: audit: type=1403 audit(1736860707.370:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 14 13:18:30.683027 systemd[1]: Successfully loaded SELinux policy in 228.074ms.
Jan 14 13:18:30.683048 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.115ms.
Jan 14 13:18:30.683068 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 13:18:30.683091 systemd[1]: Detected virtualization microsoft.
Jan 14 13:18:30.683116 systemd[1]: Detected architecture x86-64.
Jan 14 13:18:30.683136 systemd[1]: Detected first boot.
Jan 14 13:18:30.683157 systemd[1]: Hostname set to .
Jan 14 13:18:30.683177 systemd[1]: Initializing machine ID from random generator.
Jan 14 13:18:30.683196 zram_generator::config[1158]: No configuration found.
Jan 14 13:18:30.683226 systemd[1]: Populated /etc with preset unit settings.
Jan 14 13:18:30.683245 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 14 13:18:30.683267 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 14 13:18:30.683286 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 14 13:18:30.683309 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 14 13:18:30.683330 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 14 13:18:30.683348 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 14 13:18:30.683377 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 14 13:18:30.683397 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 14 13:18:30.683420 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 14 13:18:30.683446 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 14 13:18:30.683467 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 14 13:18:30.683489 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:18:30.683510 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:18:30.683531 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 14 13:18:30.683560 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 14 13:18:30.683582 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 14 13:18:30.683750 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:18:30.683773 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 14 13:18:30.683793 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:18:30.683814 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 14 13:18:30.683849 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 14 13:18:30.683868 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 14 13:18:30.683896 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 14 13:18:30.683918 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:18:30.683938 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 13:18:30.683959 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:18:30.683980 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:18:30.684003 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 14 13:18:30.684023 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 14 13:18:30.684053 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:18:30.684074 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:18:30.684097 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:18:30.684119 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 14 13:18:30.684141 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 14 13:18:30.684169 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 14 13:18:30.684193 systemd[1]: Mounting media.mount - External Media Directory...
Jan 14 13:18:30.684219 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:18:30.684241 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 14 13:18:30.684269 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 14 13:18:30.684291 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 14 13:18:30.684314 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 14 13:18:30.684334 systemd[1]: Reached target machines.target - Containers.
Jan 14 13:18:30.684363 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 14 13:18:30.684384 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:18:30.684406 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:18:30.684429 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 14 13:18:30.684448 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:18:30.684463 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 13:18:30.684479 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:18:30.684496 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 14 13:18:30.684515 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:18:30.684534 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 14 13:18:30.684550 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 14 13:18:30.684566 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 14 13:18:30.684584 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 14 13:18:30.684622 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 14 13:18:30.684639 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:18:30.684654 kernel: fuse: init (API version 7.39)
Jan 14 13:18:30.684670 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:18:30.684693 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 13:18:30.684710 kernel: loop: module loaded
Jan 14 13:18:30.684752 systemd-journald[1256]: Collecting audit messages is disabled.
Jan 14 13:18:30.684787 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 14 13:18:30.684811 systemd-journald[1256]: Journal started
Jan 14 13:18:30.684846 systemd-journald[1256]: Runtime Journal (/run/log/journal/33987d0b91804ab483433744674d4756) is 8.0M, max 158.8M, 150.8M free.
Jan 14 13:18:29.943667 systemd[1]: Queued start job for default target multi-user.target.
Jan 14 13:18:30.106185 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 14 13:18:30.106554 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 14 13:18:30.701822 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 13:18:30.705626 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 14 13:18:30.705672 systemd[1]: Stopped verity-setup.service.
Jan 14 13:18:30.711622 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:18:30.724578 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:18:30.725253 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 14 13:18:30.728242 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 14 13:18:30.732520 systemd[1]: Mounted media.mount - External Media Directory.
Jan 14 13:18:30.735342 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 14 13:18:30.738773 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 14 13:18:30.742412 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 14 13:18:30.745504 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 14 13:18:30.749248 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:18:30.754014 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 14 13:18:30.754183 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 14 13:18:30.760047 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:18:30.760768 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:18:30.765271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:18:30.765534 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:18:30.770151 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 14 13:18:30.770322 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 14 13:18:30.779950 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:18:30.780105 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:18:30.785401 kernel: ACPI: bus type drm_connector registered
Jan 14 13:18:30.786816 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 13:18:30.787006 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 13:18:30.790169 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:18:30.793591 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 13:18:30.798083 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 14 13:18:30.820539 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 13:18:30.832722 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 14 13:18:30.847795 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 14 13:18:30.851044 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 14 13:18:30.851097 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 13:18:30.856203 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 14 13:18:30.868071 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 14 13:18:30.877713 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 14 13:18:30.887504 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:18:30.896415 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 14 13:18:30.902800 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 14 13:18:30.906833 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 13:18:30.914845 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 14 13:18:30.917967 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 13:18:30.919159 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:18:30.930528 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 14 13:18:30.938802 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 14 13:18:30.946381 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:18:30.953690 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 14 13:18:30.958121 systemd-journald[1256]: Time spent on flushing to /var/log/journal/33987d0b91804ab483433744674d4756 is 41.103ms for 961 entries.
Jan 14 13:18:30.958121 systemd-journald[1256]: System Journal (/var/log/journal/33987d0b91804ab483433744674d4756) is 8.0M, max 2.6G, 2.6G free.
Jan 14 13:18:31.122915 systemd-journald[1256]: Received client request to flush runtime journal.
Jan 14 13:18:31.123081 kernel: loop0: detected capacity change from 0 to 140992
Jan 14 13:18:30.961046 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 14 13:18:30.966432 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 14 13:18:30.969929 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 14 13:18:30.977829 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 14 13:18:30.985818 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 14 13:18:30.990121 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 14 13:18:31.026384 udevadm[1304]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 14 13:18:31.125131 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 14 13:18:31.170261 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:18:31.182477 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 14 13:18:31.184926 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 14 13:18:31.224199 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 14 13:18:31.237649 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:18:31.285234 systemd-tmpfiles[1311]: ACLs are not supported, ignoring.
Jan 14 13:18:31.285257 systemd-tmpfiles[1311]: ACLs are not supported, ignoring.
Jan 14 13:18:31.290895 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:18:31.519635 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 14 13:18:31.559635 kernel: loop1: detected capacity change from 0 to 28272
Jan 14 13:18:31.849631 kernel: loop2: detected capacity change from 0 to 211296
Jan 14 13:18:31.900633 kernel: loop3: detected capacity change from 0 to 138184
Jan 14 13:18:32.400635 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 14 13:18:32.412715 kernel: loop4: detected capacity change from 0 to 140992
Jan 14 13:18:32.413989 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:18:32.431633 kernel: loop5: detected capacity change from 0 to 28272
Jan 14 13:18:32.443633 kernel: loop6: detected capacity change from 0 to 211296
Jan 14 13:18:32.449801 systemd-udevd[1321]: Using default interface naming scheme 'v255'.
Jan 14 13:18:32.462627 kernel: loop7: detected capacity change from 0 to 138184
Jan 14 13:18:32.474119 (sd-merge)[1320]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 14 13:18:32.474715 (sd-merge)[1320]: Merged extensions into '/usr'.
Jan 14 13:18:32.478140 systemd[1]: Reloading requested from client PID 1294 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 14 13:18:32.478156 systemd[1]: Reloading...
Jan 14 13:18:32.538640 zram_generator::config[1346]: No configuration found.
Jan 14 13:18:32.711016 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:18:32.829474 systemd[1]: Reloading finished in 350 ms.
Jan 14 13:18:32.865876 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:18:32.881179 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 14 13:18:32.891212 kernel: mousedev: PS/2 mouse device common for all mice
Jan 14 13:18:32.900032 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 14 13:18:32.913818 systemd[1]: Starting ensure-sysext.service...
Jan 14 13:18:32.925811 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 13:18:32.932892 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:18:32.980355 systemd-tmpfiles[1440]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 14 13:18:32.983190 systemd-tmpfiles[1440]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 14 13:18:32.985408 systemd-tmpfiles[1440]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 14 13:18:32.987940 systemd-tmpfiles[1440]: ACLs are not supported, ignoring.
Jan 14 13:18:32.988031 systemd-tmpfiles[1440]: ACLs are not supported, ignoring.
Jan 14 13:18:33.005453 systemd[1]: Reloading requested from client PID 1435 ('systemctl') (unit ensure-sysext.service)...
Jan 14 13:18:33.005636 systemd[1]: Reloading...
Jan 14 13:18:33.013687 systemd-tmpfiles[1440]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 13:18:33.014986 systemd-tmpfiles[1440]: Skipping /boot
Jan 14 13:18:33.033745 kernel: hv_vmbus: registering driver hv_balloon
Jan 14 13:18:33.042933 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 14 13:18:33.052878 kernel: hv_vmbus: registering driver hyperv_fb
Jan 14 13:18:33.052969 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 14 13:18:33.059636 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 14 13:18:33.060291 systemd-tmpfiles[1440]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 13:18:33.060425 systemd-tmpfiles[1440]: Skipping /boot
Jan 14 13:18:33.065444 kernel: Console: switching to colour dummy device 80x25
Jan 14 13:18:33.069898 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 13:18:33.210659 zram_generator::config[1477]: No configuration found.
Jan 14 13:18:33.410636 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1413)
Jan 14 13:18:33.528517 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:18:33.545165 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jan 14 13:18:33.650856 systemd[1]: Reloading finished in 644 ms.
Jan 14 13:18:33.671141 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:18:33.702326 systemd[1]: Finished ensure-sysext.service.
Jan 14 13:18:33.726689 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 13:18:33.731029 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:18:33.745858 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 13:18:33.765897 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 14 13:18:33.769405 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:18:33.782944 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:18:33.790822 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 13:18:33.801627 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:18:33.808249 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:18:33.812000 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:18:33.818518 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 14 13:18:33.825320 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 14 13:18:33.835791 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:18:33.838798 systemd[1]: Reached target time-set.target - System Time Set.
Jan 14 13:18:33.844225 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 14 13:18:33.850830 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 14 13:18:33.858830 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:18:33.866743 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:18:33.868194 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 14 13:18:33.873530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:18:33.873765 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:18:33.877358 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 13:18:33.877542 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 13:18:33.881054 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:18:33.881255 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:18:33.885287 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:18:33.885482 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:18:33.909986 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 14 13:18:33.913779 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 13:18:33.914205 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 13:18:33.915032 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 14 13:18:33.925908 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 14 13:18:33.940161 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 14 13:18:33.948584 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 14 13:18:33.962461 augenrules[1641]: No rules
Jan 14 13:18:33.963266 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 13:18:33.963545 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 13:18:34.009651 lvm[1629]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 14 13:18:34.046048 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 14 13:18:34.050148 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:18:34.061944 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 14 13:18:34.080184 lvm[1650]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 14 13:18:34.101486 systemd-resolved[1615]: Positive Trust Anchors:
Jan 14 13:18:34.101503 systemd-resolved[1615]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 13:18:34.101547 systemd-resolved[1615]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 13:18:34.107893 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 14 13:18:34.130374 systemd-resolved[1615]: Using system hostname 'ci-4152.2.0-a-ae9609fe4e'.
Jan 14 13:18:34.132383 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 13:18:34.133518 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:18:34.141895 systemd-networkd[1438]: lo: Link UP
Jan 14 13:18:34.141904 systemd-networkd[1438]: lo: Gained carrier
Jan 14 13:18:34.145363 systemd-networkd[1438]: Enumeration completed
Jan 14 13:18:34.145490 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 13:18:34.146224 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:18:34.146231 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:18:34.148566 systemd[1]: Reached target network.target - Network.
Jan 14 13:18:34.155828 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 14 13:18:34.205635 kernel: mlx5_core db54:00:02.0 enP56148s1: Link up
Jan 14 13:18:34.227723 kernel: hv_netvsc 6045bd0e-bce9-6045-bd0e-bce96045bd0e eth0: Data path switched to VF: enP56148s1
Jan 14 13:18:34.229272 systemd-networkd[1438]: enP56148s1: Link UP
Jan 14 13:18:34.229475 systemd-networkd[1438]: eth0: Link UP
Jan 14 13:18:34.229480 systemd-networkd[1438]: eth0: Gained carrier
Jan 14 13:18:34.229510 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:18:34.234202 systemd-networkd[1438]: enP56148s1: Gained carrier
Jan 14 13:18:34.273705 systemd-networkd[1438]: eth0: DHCPv4 address 10.200.4.47/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 14 13:18:34.294534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:18:34.974572 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 14 13:18:34.980313 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 14 13:18:35.948885 systemd-networkd[1438]: eth0: Gained IPv6LL
Jan 14 13:18:35.952047 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 14 13:18:35.955896 systemd[1]: Reached target network-online.target - Network is Online.
Jan 14 13:18:36.076862 systemd-networkd[1438]: enP56148s1: Gained IPv6LL
Jan 14 13:18:36.277259 ldconfig[1289]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 14 13:18:36.294122 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 14 13:18:36.331868 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 14 13:18:36.359265 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 14 13:18:36.363779 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 13:18:36.367311 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 14 13:18:36.370679 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 14 13:18:36.374199 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 14 13:18:36.377279 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 14 13:18:36.380353 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 14 13:18:36.383712 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 14 13:18:36.383760 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:18:36.386019 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:18:36.389119 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 14 13:18:36.393761 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 14 13:18:36.405746 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 14 13:18:36.409594 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 14 13:18:36.413439 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:18:36.416442 systemd[1]: Reached target basic.target - Basic System.
Jan 14 13:18:36.419352 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 14 13:18:36.419391 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 14 13:18:36.437764 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 14 13:18:36.443780 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 14 13:18:36.454809 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 14 13:18:36.458905 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 14 13:18:36.468793 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 14 13:18:36.476149 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 14 13:18:36.479217 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 14 13:18:36.479277 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 14 13:18:36.483913 jq[1668]: false
Jan 14 13:18:36.489865 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 14 13:18:36.492935 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 14 13:18:36.494372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:18:36.505805 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 14 13:18:36.510380 KVP[1670]: KVP starting; pid is:1670
Jan 14 13:18:36.513541 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 14 13:18:36.514983 (chronyd)[1664]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 14 13:18:36.524327 chronyd[1679]: chronyd version 4.6 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 14 13:18:36.525800 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 14 13:18:36.532149 kernel: hv_utils: KVP IC version 4.0
Jan 14 13:18:36.532272 KVP[1670]: KVP LIC Version: 3.1
Jan 14 13:18:36.542813 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 14 13:18:36.549799 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 14 13:18:36.558685 chronyd[1679]: Timezone right/UTC failed leap second check, ignoring
Jan 14 13:18:36.558955 chronyd[1679]: Loaded seccomp filter (level 2)
Jan 14 13:18:36.562419 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 14 13:18:36.569244 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 14 13:18:36.570491 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 14 13:18:36.579785 systemd[1]: Starting update-engine.service - Update Engine...
Jan 14 13:18:36.583954 extend-filesystems[1669]: Found loop4
Jan 14 13:18:36.589753 extend-filesystems[1669]: Found loop5
Jan 14 13:18:36.589753 extend-filesystems[1669]: Found loop6
Jan 14 13:18:36.589753 extend-filesystems[1669]: Found loop7
Jan 14 13:18:36.589753 extend-filesystems[1669]: Found sda
Jan 14 13:18:36.589753 extend-filesystems[1669]: Found sda1
Jan 14 13:18:36.589753 extend-filesystems[1669]: Found sda2
Jan 14 13:18:36.589753 extend-filesystems[1669]: Found sda3
Jan 14 13:18:36.589753 extend-filesystems[1669]: Found usr
Jan 14 13:18:36.589753 extend-filesystems[1669]: Found sda4
Jan 14 13:18:36.589753 extend-filesystems[1669]: Found sda6
Jan 14 13:18:36.589753 extend-filesystems[1669]: Found sda7
Jan 14 13:18:36.589753 extend-filesystems[1669]: Found sda9
Jan 14 13:18:36.589753 extend-filesystems[1669]: Checking size of /dev/sda9
Jan 14 13:18:36.593707 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 14 13:18:36.611747 systemd[1]: Started chronyd.service - NTP client/server.
Jan 14 13:18:36.628981 jq[1690]: true
Jan 14 13:18:36.635283 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 14 13:18:36.635516 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 14 13:18:36.638950 systemd[1]: motdgen.service: Deactivated successfully.
Jan 14 13:18:36.639201 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 14 13:18:36.649111 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 14 13:18:36.658336 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 14 13:18:36.658933 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 14 13:18:36.698785 extend-filesystems[1669]: Old size kept for /dev/sda9
Jan 14 13:18:36.707393 extend-filesystems[1669]: Found sr0
Jan 14 13:18:36.711033 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 14 13:18:36.711282 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 14 13:18:36.718115 update_engine[1688]: I20250114 13:18:36.718022 1688 main.cc:92] Flatcar Update Engine starting
Jan 14 13:18:36.728973 (ntainerd)[1713]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 14 13:18:36.736015 jq[1706]: true
Jan 14 13:18:36.747635 tar[1704]: linux-amd64/helm
Jan 14 13:18:36.748759 dbus-daemon[1667]: [system] SELinux support is enabled
Jan 14 13:18:36.748966 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 14 13:18:36.761488 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 14 13:18:36.761543 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 14 13:18:36.769816 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 14 13:18:36.769855 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 14 13:18:36.774536 update_engine[1688]: I20250114 13:18:36.773339 1688 update_check_scheduler.cc:74] Next update check in 3m21s
Jan 14 13:18:36.783704 systemd[1]: Started update-engine.service - Update Engine.
Jan 14 13:18:36.793817 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 14 13:18:36.846388 systemd-logind[1684]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 14 13:18:36.850414 systemd-logind[1684]: New seat seat0.
Jan 14 13:18:36.855458 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 14 13:18:36.863760 sshd_keygen[1697]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 14 13:18:36.894065 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1749)
Jan 14 13:18:36.896630 coreos-metadata[1666]: Jan 14 13:18:36.894 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 13:18:36.897326 coreos-metadata[1666]: Jan 14 13:18:36.897 INFO Fetch successful
Jan 14 13:18:36.897597 coreos-metadata[1666]: Jan 14 13:18:36.897 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 14 13:18:36.903838 coreos-metadata[1666]: Jan 14 13:18:36.903 INFO Fetch successful
Jan 14 13:18:36.904408 coreos-metadata[1666]: Jan 14 13:18:36.904 INFO Fetching http://168.63.129.16/machine/e1b0ebe2-7b96-490b-86b4-fd7ca3747803/8da1659d%2D6473%2D45c5%2D9dea%2D552e20554dec.%5Fci%2D4152.2.0%2Da%2Dae9609fe4e?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 14 13:18:36.906846 coreos-metadata[1666]: Jan 14 13:18:36.906 INFO Fetch successful
Jan 14 13:18:36.906846 coreos-metadata[1666]: Jan 14 13:18:36.906 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 14 13:18:36.917139 coreos-metadata[1666]: Jan 14 13:18:36.917 INFO Fetch successful
Jan 14 13:18:36.975172 bash[1750]: Updated "/home/core/.ssh/authorized_keys"
Jan 14 13:18:36.977597 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 14 13:18:36.994455 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 14 13:18:37.003887 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 14 13:18:37.011990 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 14 13:18:37.074919 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 14 13:18:37.090937 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 14 13:18:37.099842 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 14 13:18:37.107120 systemd[1]: issuegen.service: Deactivated successfully.
Jan 14 13:18:37.107359 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 14 13:18:37.156027 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 14 13:18:37.234023 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 14 13:18:37.238229 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 14 13:18:37.255509 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 14 13:18:37.270163 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 14 13:18:37.275358 systemd[1]: Reached target getty.target - Login Prompts.
Jan 14 13:18:37.297929 locksmithd[1733]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 14 13:18:37.627639 tar[1704]: linux-amd64/LICENSE
Jan 14 13:18:37.627639 tar[1704]: linux-amd64/README.md
Jan 14 13:18:37.640631 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 14 13:18:37.924769 containerd[1713]: time="2025-01-14T13:18:37.924333100Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 14 13:18:37.960502 containerd[1713]: time="2025-01-14T13:18:37.960213400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:18:37.962179 containerd[1713]: time="2025-01-14T13:18:37.962129300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:18:37.962179 containerd[1713]: time="2025-01-14T13:18:37.962162500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 14 13:18:37.962179 containerd[1713]: time="2025-01-14T13:18:37.962184400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 14 13:18:37.962787 containerd[1713]: time="2025-01-14T13:18:37.962752200Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 14 13:18:37.962787 containerd[1713]: time="2025-01-14T13:18:37.962784800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 14 13:18:37.962913 containerd[1713]: time="2025-01-14T13:18:37.962866600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:18:37.962913 containerd[1713]: time="2025-01-14T13:18:37.962885000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:18:37.963126 containerd[1713]: time="2025-01-14T13:18:37.963098500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:18:37.963126 containerd[1713]: time="2025-01-14T13:18:37.963122100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 14 13:18:37.963218 containerd[1713]: time="2025-01-14T13:18:37.963142400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:18:37.963218 containerd[1713]: time="2025-01-14T13:18:37.963157000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 14 13:18:37.963289 containerd[1713]: time="2025-01-14T13:18:37.963262000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:18:37.963537 containerd[1713]: time="2025-01-14T13:18:37.963503100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:18:37.963691 containerd[1713]: time="2025-01-14T13:18:37.963668100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:18:37.963691 containerd[1713]: time="2025-01-14T13:18:37.963688000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 14 13:18:37.963818 containerd[1713]: time="2025-01-14T13:18:37.963798000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 14 13:18:37.963888 containerd[1713]: time="2025-01-14T13:18:37.963871400Z" level=info msg="metadata content store policy set" policy=shared
Jan 14 13:18:37.976987 containerd[1713]: time="2025-01-14T13:18:37.976300500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 14 13:18:37.976987 containerd[1713]: time="2025-01-14T13:18:37.976379200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 14 13:18:37.976987 containerd[1713]: time="2025-01-14T13:18:37.976402000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 14 13:18:37.976987 containerd[1713]: time="2025-01-14T13:18:37.976425400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 14 13:18:37.976987 containerd[1713]: time="2025-01-14T13:18:37.976445600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 14 13:18:37.976987 containerd[1713]: time="2025-01-14T13:18:37.976632600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 14 13:18:37.976987 containerd[1713]: time="2025-01-14T13:18:37.976925700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 14 13:18:37.977305 containerd[1713]: time="2025-01-14T13:18:37.977039600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 14 13:18:37.977305 containerd[1713]: time="2025-01-14T13:18:37.977061000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 14 13:18:37.977305 containerd[1713]: time="2025-01-14T13:18:37.977081300Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 14 13:18:37.977305 containerd[1713]: time="2025-01-14T13:18:37.977099400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 14 13:18:37.977305 containerd[1713]: time="2025-01-14T13:18:37.977118800Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"...
type=io.containerd.service.v1 Jan 14 13:18:37.977305 containerd[1713]: time="2025-01-14T13:18:37.977138800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 14 13:18:37.977305 containerd[1713]: time="2025-01-14T13:18:37.977158900Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 14 13:18:37.977305 containerd[1713]: time="2025-01-14T13:18:37.977182000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 14 13:18:37.977305 containerd[1713]: time="2025-01-14T13:18:37.977204800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 14 13:18:37.977305 containerd[1713]: time="2025-01-14T13:18:37.977222500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 14 13:18:37.977305 containerd[1713]: time="2025-01-14T13:18:37.977239300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 14 13:18:37.977305 containerd[1713]: time="2025-01-14T13:18:37.977268700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 14 13:18:37.977305 containerd[1713]: time="2025-01-14T13:18:37.977287300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 14 13:18:37.977305 containerd[1713]: time="2025-01-14T13:18:37.977305500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 14 13:18:37.977834 containerd[1713]: time="2025-01-14T13:18:37.977335700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 14 13:18:37.977834 containerd[1713]: time="2025-01-14T13:18:37.977355000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 14 13:18:37.977834 containerd[1713]: time="2025-01-14T13:18:37.977375100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 14 13:18:37.977834 containerd[1713]: time="2025-01-14T13:18:37.977392200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 14 13:18:37.977834 containerd[1713]: time="2025-01-14T13:18:37.977410400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 14 13:18:37.977834 containerd[1713]: time="2025-01-14T13:18:37.977429100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 14 13:18:37.977834 containerd[1713]: time="2025-01-14T13:18:37.977449300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 14 13:18:37.977834 containerd[1713]: time="2025-01-14T13:18:37.977466200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 14 13:18:37.977834 containerd[1713]: time="2025-01-14T13:18:37.977482100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 14 13:18:37.977834 containerd[1713]: time="2025-01-14T13:18:37.977499900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 14 13:18:37.977834 containerd[1713]: time="2025-01-14T13:18:37.977519600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 14 13:18:37.977834 containerd[1713]: time="2025-01-14T13:18:37.977558800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 14 13:18:37.977834 containerd[1713]: time="2025-01-14T13:18:37.977579700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 14 13:18:37.977834 containerd[1713]: time="2025-01-14T13:18:37.977597000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 14 13:18:37.979513 containerd[1713]: time="2025-01-14T13:18:37.977676200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 14 13:18:37.979513 containerd[1713]: time="2025-01-14T13:18:37.977703200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 14 13:18:37.979513 containerd[1713]: time="2025-01-14T13:18:37.977719800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 14 13:18:37.979513 containerd[1713]: time="2025-01-14T13:18:37.977739600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 14 13:18:37.979513 containerd[1713]: time="2025-01-14T13:18:37.977754300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 14 13:18:37.979513 containerd[1713]: time="2025-01-14T13:18:37.977774600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 14 13:18:37.979513 containerd[1713]: time="2025-01-14T13:18:37.977788200Z" level=info msg="NRI interface is disabled by configuration." Jan 14 13:18:37.979513 containerd[1713]: time="2025-01-14T13:18:37.977803300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 14 13:18:37.981139 containerd[1713]: time="2025-01-14T13:18:37.978209600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 14 13:18:37.981139 containerd[1713]: time="2025-01-14T13:18:37.978280000Z" level=info msg="Connect containerd service" Jan 14 13:18:37.981139 containerd[1713]: time="2025-01-14T13:18:37.978333200Z" level=info msg="using legacy CRI server" Jan 14 13:18:37.981139 containerd[1713]: time="2025-01-14T13:18:37.978343000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 13:18:37.981139 containerd[1713]: time="2025-01-14T13:18:37.978511100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 14 13:18:37.981139 containerd[1713]: time="2025-01-14T13:18:37.979622500Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 13:18:37.981139 containerd[1713]: time="2025-01-14T13:18:37.980384600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 13:18:37.981139 containerd[1713]: time="2025-01-14T13:18:37.980436600Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 14 13:18:37.981139 containerd[1713]: time="2025-01-14T13:18:37.980471700Z" level=info msg="Start subscribing containerd event" Jan 14 13:18:37.981139 containerd[1713]: time="2025-01-14T13:18:37.980516400Z" level=info msg="Start recovering state" Jan 14 13:18:37.981139 containerd[1713]: time="2025-01-14T13:18:37.980584500Z" level=info msg="Start event monitor" Jan 14 13:18:37.981139 containerd[1713]: time="2025-01-14T13:18:37.980627400Z" level=info msg="Start snapshots syncer" Jan 14 13:18:37.981139 containerd[1713]: time="2025-01-14T13:18:37.980653600Z" level=info msg="Start cni network conf syncer for default" Jan 14 13:18:37.981139 containerd[1713]: time="2025-01-14T13:18:37.980670400Z" level=info msg="Start streaming server" Jan 14 13:18:37.981139 containerd[1713]: time="2025-01-14T13:18:37.980739600Z" level=info msg="containerd successfully booted in 0.057514s" Jan 14 13:18:37.982249 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 13:18:38.221811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:18:38.228480 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 13:18:38.233084 systemd[1]: Startup finished in 829ms (firmware) + 26.110s (loader) + 1.114s (kernel) + 11.720s (initrd) + 11.089s (userspace) = 50.864s. Jan 14 13:18:38.239192 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:18:38.528955 login[1839]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 14 13:18:38.531555 login[1840]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 14 13:18:38.543742 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 13:18:38.555927 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jan 14 13:18:38.559858 systemd-logind[1684]: New session 1 of user core.
Jan 14 13:18:38.571966 systemd-logind[1684]: New session 2 of user core.
Jan 14 13:18:38.580472 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 14 13:18:38.586924 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 14 13:18:38.599363 (systemd)[1868]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 14 13:18:38.813260 systemd[1868]: Queued start job for default target default.target.
Jan 14 13:18:38.818097 systemd[1868]: Created slice app.slice - User Application Slice.
Jan 14 13:18:38.818138 systemd[1868]: Reached target paths.target - Paths.
Jan 14 13:18:38.818157 systemd[1868]: Reached target timers.target - Timers.
Jan 14 13:18:38.819831 systemd[1868]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 14 13:18:38.843043 systemd[1868]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 14 13:18:38.843213 systemd[1868]: Reached target sockets.target - Sockets.
Jan 14 13:18:38.843264 systemd[1868]: Reached target basic.target - Basic System.
Jan 14 13:18:38.843312 systemd[1868]: Reached target default.target - Main User Target.
Jan 14 13:18:38.843352 systemd[1868]: Startup finished in 236ms.
Jan 14 13:18:38.843486 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 14 13:18:38.850801 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 14 13:18:38.851879 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 14 13:18:39.136744 kubelet[1856]: E0114 13:18:39.136057 1856 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:18:39.141286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:18:39.141740 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:18:39.243306 waagent[1836]: 2025-01-14T13:18:39.243187Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jan 14 13:18:39.259121 waagent[1836]: 2025-01-14T13:18:39.258976Z INFO Daemon Daemon OS: flatcar 4152.2.0
Jan 14 13:18:39.261911 waagent[1836]: 2025-01-14T13:18:39.261815Z INFO Daemon Daemon Python: 3.11.10
Jan 14 13:18:39.264579 waagent[1836]: 2025-01-14T13:18:39.264500Z INFO Daemon Daemon Run daemon
Jan 14 13:18:39.267231 waagent[1836]: 2025-01-14T13:18:39.267165Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4152.2.0'
Jan 14 13:18:39.272705 waagent[1836]: 2025-01-14T13:18:39.272601Z INFO Daemon Daemon Using waagent for provisioning
Jan 14 13:18:39.280996 waagent[1836]: 2025-01-14T13:18:39.274340Z INFO Daemon Daemon Activate resource disk
Jan 14 13:18:39.280996 waagent[1836]: 2025-01-14T13:18:39.275988Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 14 13:18:39.281787 waagent[1836]: 2025-01-14T13:18:39.281737Z INFO Daemon Daemon Found device: None
Jan 14 13:18:39.312522 waagent[1836]: 2025-01-14T13:18:39.282773Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 14 13:18:39.312522 waagent[1836]: 2025-01-14T13:18:39.283184Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 14 13:18:39.312522 waagent[1836]: 2025-01-14T13:18:39.284629Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 14 13:18:39.312522 waagent[1836]: 2025-01-14T13:18:39.285200Z INFO Daemon Daemon Running default provisioning handler
Jan 14 13:18:39.312522 waagent[1836]: 2025-01-14T13:18:39.293495Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 14 13:18:39.312522 waagent[1836]: 2025-01-14T13:18:39.296198Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 14 13:18:39.312522 waagent[1836]: 2025-01-14T13:18:39.297201Z INFO Daemon Daemon cloud-init is enabled: False
Jan 14 13:18:39.312522 waagent[1836]: 2025-01-14T13:18:39.297627Z INFO Daemon Daemon Copying ovf-env.xml
Jan 14 13:18:39.399660 waagent[1836]: 2025-01-14T13:18:39.396449Z INFO Daemon Daemon Successfully mounted dvd
Jan 14 13:18:39.411738 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 14 13:18:39.413498 waagent[1836]: 2025-01-14T13:18:39.413433Z INFO Daemon Daemon Detect protocol endpoint
Jan 14 13:18:39.416188 waagent[1836]: 2025-01-14T13:18:39.416129Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 14 13:18:39.429238 waagent[1836]: 2025-01-14T13:18:39.417176Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 14 13:18:39.429238 waagent[1836]: 2025-01-14T13:18:39.417589Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 14 13:18:39.429238 waagent[1836]: 2025-01-14T13:18:39.418393Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 14 13:18:39.429238 waagent[1836]: 2025-01-14T13:18:39.419166Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 14 13:18:39.464198 waagent[1836]: 2025-01-14T13:18:39.464126Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 14 13:18:39.472154 waagent[1836]: 2025-01-14T13:18:39.465555Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 14 13:18:39.472154 waagent[1836]: 2025-01-14T13:18:39.465903Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 14 13:18:39.607000 waagent[1836]: 2025-01-14T13:18:39.606888Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 14 13:18:39.610248 waagent[1836]: 2025-01-14T13:18:39.610171Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 14 13:18:39.617144 waagent[1836]: 2025-01-14T13:18:39.617085Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 14 13:18:39.633317 waagent[1836]: 2025-01-14T13:18:39.633259Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.162
Jan 14 13:18:39.651361 waagent[1836]: 2025-01-14T13:18:39.634866Z INFO Daemon
Jan 14 13:18:39.651361 waagent[1836]: 2025-01-14T13:18:39.636678Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 9b014bd0-a920-44c7-881e-27367af79a09 eTag: 3516232480045815650 source: Fabric]
Jan 14 13:18:39.651361 waagent[1836]: 2025-01-14T13:18:39.638060Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 14 13:18:39.651361 waagent[1836]: 2025-01-14T13:18:39.639053Z INFO Daemon
Jan 14 13:18:39.651361 waagent[1836]: 2025-01-14T13:18:39.639895Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 14 13:18:39.651361 waagent[1836]: 2025-01-14T13:18:39.644889Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 14 13:18:39.708307 waagent[1836]: 2025-01-14T13:18:39.708223Z INFO Daemon Downloaded certificate {'thumbprint': 'A271EFD7C264D0B50D90FEF9AD8AC8437744DE5F', 'hasPrivateKey': True}
Jan 14 13:18:39.713231 waagent[1836]: 2025-01-14T13:18:39.713168Z INFO Daemon Fetch goal state completed
Jan 14 13:18:39.721834 waagent[1836]: 2025-01-14T13:18:39.721784Z INFO Daemon Daemon Starting provisioning
Jan 14 13:18:39.724788 waagent[1836]: 2025-01-14T13:18:39.724732Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 14 13:18:39.729729 waagent[1836]: 2025-01-14T13:18:39.725767Z INFO Daemon Daemon Set hostname [ci-4152.2.0-a-ae9609fe4e]
Jan 14 13:18:39.742752 waagent[1836]: 2025-01-14T13:18:39.742670Z INFO Daemon Daemon Publish hostname [ci-4152.2.0-a-ae9609fe4e]
Jan 14 13:18:39.750489 waagent[1836]: 2025-01-14T13:18:39.743942Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jan 14 13:18:39.750489 waagent[1836]: 2025-01-14T13:18:39.744801Z INFO Daemon Daemon Primary interface is [eth0]
Jan 14 13:18:39.777174 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:18:39.777184 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:18:39.777238 systemd-networkd[1438]: eth0: DHCP lease lost
Jan 14 13:18:39.778759 waagent[1836]: 2025-01-14T13:18:39.778560Z INFO Daemon Daemon Create user account if not exists
Jan 14 13:18:39.798225 waagent[1836]: 2025-01-14T13:18:39.780069Z INFO Daemon Daemon User core already exists, skip useradd
Jan 14 13:18:39.798225 waagent[1836]: 2025-01-14T13:18:39.780944Z INFO Daemon Daemon Configure sudoer
Jan 14 13:18:39.798225 waagent[1836]: 2025-01-14T13:18:39.782130Z INFO Daemon Daemon Configure sshd
Jan 14 13:18:39.798225 waagent[1836]: 2025-01-14T13:18:39.782491Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jan 14 13:18:39.798225 waagent[1836]: 2025-01-14T13:18:39.783245Z INFO Daemon Daemon Deploy ssh public key.
Jan 14 13:18:39.798717 systemd-networkd[1438]: eth0: DHCPv6 lease lost
Jan 14 13:18:39.842664 systemd-networkd[1438]: eth0: DHCPv4 address 10.200.4.47/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 14 13:18:49.391960 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 14 13:18:49.397976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:18:49.514862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:18:49.519555 (kubelet)[1930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:18:50.079341 kubelet[1930]: E0114 13:18:50.079270 1930 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:18:50.083664 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:18:50.083866 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:19:00.334364 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 14 13:19:00.339875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:19:00.361291 chronyd[1679]: Selected source PHC0
Jan 14 13:19:00.432701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:19:00.441914 (kubelet)[1947]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:19:00.923964 kubelet[1947]: E0114 13:19:00.923898 1947 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:19:00.926698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:19:00.926904 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:19:09.876780 waagent[1836]: 2025-01-14T13:19:09.876691Z INFO Daemon Daemon Provisioning complete
Jan 14 13:19:09.891037 waagent[1836]: 2025-01-14T13:19:09.890960Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jan 14 13:19:09.899046 waagent[1836]: 2025-01-14T13:19:09.892759Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jan 14 13:19:09.899046 waagent[1836]: 2025-01-14T13:19:09.893220Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Jan 14 13:19:10.023308 waagent[1955]: 2025-01-14T13:19:10.023196Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Jan 14 13:19:10.023778 waagent[1955]: 2025-01-14T13:19:10.023384Z INFO ExtHandler ExtHandler OS: flatcar 4152.2.0
Jan 14 13:19:10.023778 waagent[1955]: 2025-01-14T13:19:10.023470Z INFO ExtHandler ExtHandler Python: 3.11.10
Jan 14 13:19:10.092369 waagent[1955]: 2025-01-14T13:19:10.092262Z INFO ExtHandler ExtHandler Distro: flatcar-4152.2.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jan 14 13:19:10.092654 waagent[1955]: 2025-01-14T13:19:10.092573Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 14 13:19:10.092768 waagent[1955]: 2025-01-14T13:19:10.092721Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 14 13:19:10.100769 waagent[1955]: 2025-01-14T13:19:10.100694Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 14 13:19:10.109879 waagent[1955]: 2025-01-14T13:19:10.109820Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.162
Jan 14 13:19:10.110408 waagent[1955]: 2025-01-14T13:19:10.110346Z INFO ExtHandler
Jan 14 13:19:10.110504 waagent[1955]: 2025-01-14T13:19:10.110446Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4251e4be-20ac-401f-a7d8-7b13ca0873ec eTag: 3516232480045815650 source: Fabric]
Jan 14 13:19:10.110847 waagent[1955]: 2025-01-14T13:19:10.110795Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 14 13:19:10.111441 waagent[1955]: 2025-01-14T13:19:10.111384Z INFO ExtHandler
Jan 14 13:19:10.111522 waagent[1955]: 2025-01-14T13:19:10.111471Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jan 14 13:19:10.115544 waagent[1955]: 2025-01-14T13:19:10.115500Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jan 14 13:19:10.184914 waagent[1955]: 2025-01-14T13:19:10.184750Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A271EFD7C264D0B50D90FEF9AD8AC8437744DE5F', 'hasPrivateKey': True}
Jan 14 13:19:10.185449 waagent[1955]: 2025-01-14T13:19:10.185388Z INFO ExtHandler Fetch goal state completed
Jan 14 13:19:10.198865 waagent[1955]: 2025-01-14T13:19:10.198792Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1955
Jan 14 13:19:10.199030 waagent[1955]: 2025-01-14T13:19:10.198979Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jan 14 13:19:10.200635 waagent[1955]: 2025-01-14T13:19:10.200560Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4152.2.0', '', 'Flatcar Container Linux by Kinvolk']
Jan 14 13:19:10.201015 waagent[1955]: 2025-01-14T13:19:10.200963Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jan 14 13:19:10.240337 waagent[1955]: 2025-01-14T13:19:10.240283Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jan 14 13:19:10.240581 waagent[1955]: 2025-01-14T13:19:10.240531Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jan 14 13:19:10.247339 waagent[1955]: 2025-01-14T13:19:10.247293Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jan 14 13:19:10.254878 systemd[1]: Reloading requested from client PID 1968 ('systemctl') (unit waagent.service)...
Jan 14 13:19:10.254896 systemd[1]: Reloading...
Jan 14 13:19:10.342670 zram_generator::config[2005]: No configuration found.
Jan 14 13:19:10.464135 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:19:10.547474 systemd[1]: Reloading finished in 292 ms.
Jan 14 13:19:10.573526 waagent[1955]: 2025-01-14T13:19:10.573402Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Jan 14 13:19:10.582861 systemd[1]: Reloading requested from client PID 2059 ('systemctl') (unit waagent.service)...
Jan 14 13:19:10.582877 systemd[1]: Reloading...
Jan 14 13:19:10.662635 zram_generator::config[2093]: No configuration found.
Jan 14 13:19:10.781181 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:19:10.864329 systemd[1]: Reloading finished in 281 ms.
Jan 14 13:19:10.890872 waagent[1955]: 2025-01-14T13:19:10.890489Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jan 14 13:19:10.890872 waagent[1955]: 2025-01-14T13:19:10.890779Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jan 14 13:19:11.126017 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 14 13:19:11.135827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:19:11.329269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:19:11.334269 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:19:11.775460 kubelet[2165]: E0114 13:19:11.774844 2165 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:19:11.779941 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:19:11.780137 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:19:11.871253 waagent[1955]: 2025-01-14T13:19:11.871154Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jan 14 13:19:11.872010 waagent[1955]: 2025-01-14T13:19:11.871944Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Jan 14 13:19:11.872829 waagent[1955]: 2025-01-14T13:19:11.872770Z INFO ExtHandler ExtHandler Starting env monitor service.
Jan 14 13:19:11.872963 waagent[1955]: 2025-01-14T13:19:11.872911Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 14 13:19:11.873413 waagent[1955]: 2025-01-14T13:19:11.873357Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jan 14 13:19:11.873542 waagent[1955]: 2025-01-14T13:19:11.873491Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 14 13:19:11.873640 waagent[1955]: 2025-01-14T13:19:11.873575Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 14 13:19:11.873912 waagent[1955]: 2025-01-14T13:19:11.873862Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jan 14 13:19:11.874583 waagent[1955]: 2025-01-14T13:19:11.874533Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jan 14 13:19:11.874740 waagent[1955]: 2025-01-14T13:19:11.874697Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jan 14 13:19:11.875089 waagent[1955]: 2025-01-14T13:19:11.875014Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 14 13:19:11.875467 waagent[1955]: 2025-01-14T13:19:11.875395Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jan 14 13:19:11.875578 waagent[1955]: 2025-01-14T13:19:11.875510Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jan 14 13:19:11.875578 waagent[1955]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jan 14 13:19:11.875578 waagent[1955]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0
Jan 14 13:19:11.875578 waagent[1955]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jan 14 13:19:11.875578 waagent[1955]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jan 14 13:19:11.875578 waagent[1955]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 14 13:19:11.875578 waagent[1955]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 14 13:19:11.875811 waagent[1955]: 2025-01-14T13:19:11.875588Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
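The `/proc/net/route` dump above encodes addresses as little-endian hex, so they read "backwards" compared to dotted-quad notation. A short sketch of how to decode those fields:

```python
import socket
import struct

def hex_ip(field: str) -> str:
    """Decode a little-endian hex field from /proc/net/route to dotted quad."""
    return socket.inet_ntoa(struct.pack("<L", int(field, 16)))

# Values taken from the routing table in the log above:
print(hex_ip("0104C80A"))  # 10.200.4.1  (default gateway)
print(hex_ip("0004C80A"))  # 10.200.4.0  (local subnet)
print(hex_ip("00FFFFFF"))  # 255.255.255.0  (the /24 mask)
```

Decoded, the table matches the interface addresses reported later: eth0 holds 10.200.4.47/24 with a default route via 10.200.4.1, plus host routes to 168.63.129.16 (10813FA8) and 169.254.169.254 (FEA9FEA9) through that gateway.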
Jan 14 13:19:11.876341 waagent[1955]: 2025-01-14T13:19:11.876251Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jan 14 13:19:11.877400 waagent[1955]: 2025-01-14T13:19:11.877358Z INFO EnvHandler ExtHandler Configure routes
Jan 14 13:19:11.878879 waagent[1955]: 2025-01-14T13:19:11.878833Z INFO EnvHandler ExtHandler Gateway:None
Jan 14 13:19:11.880143 waagent[1955]: 2025-01-14T13:19:11.880097Z INFO EnvHandler ExtHandler Routes:None
Jan 14 13:19:11.884275 waagent[1955]: 2025-01-14T13:19:11.884214Z INFO ExtHandler ExtHandler
Jan 14 13:19:11.884369 waagent[1955]: 2025-01-14T13:19:11.884326Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 940a1d57-42e1-4655-b3dc-c3407a92b582 correlation 54fa52a1-db90-4f8b-9d53-b356c039561c created: 2025-01-14T13:17:31.329063Z]
Jan 14 13:19:11.884759 waagent[1955]: 2025-01-14T13:19:11.884715Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jan 14 13:19:11.885278 waagent[1955]: 2025-01-14T13:19:11.885233Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Jan 14 13:19:11.924240 waagent[1955]: 2025-01-14T13:19:11.923634Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C3F4A218-6A5A-4F15-A188-9730FEC048D0;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Jan 14 13:19:11.929015 waagent[1955]: 2025-01-14T13:19:11.928941Z INFO MonitorHandler ExtHandler Network interfaces:
Jan 14 13:19:11.929015 waagent[1955]: Executing ['ip', '-a', '-o', 'link']:
Jan 14 13:19:11.929015 waagent[1955]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jan 14 13:19:11.929015 waagent[1955]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:0e:bc:e9 brd ff:ff:ff:ff:ff:ff
Jan 14 13:19:11.929015 waagent[1955]: 3: enP56148s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:0e:bc:e9 brd ff:ff:ff:ff:ff:ff\ altname enP56148p0s2
Jan 14 13:19:11.929015 waagent[1955]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jan 14 13:19:11.929015 waagent[1955]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jan 14 13:19:11.929015 waagent[1955]: 2: eth0 inet 10.200.4.47/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
Jan 14 13:19:11.929015 waagent[1955]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jan 14 13:19:11.929015 waagent[1955]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jan 14 13:19:11.929015 waagent[1955]: 2: eth0 inet6 fe80::6245:bdff:fe0e:bce9/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 14 13:19:11.929015 waagent[1955]: 3: enP56148s1 inet6 fe80::6245:bdff:fe0e:bce9/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 14 13:19:12.005171 waagent[1955]: 2025-01-14T13:19:12.005087Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Jan 14 13:19:12.005171 waagent[1955]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 14 13:19:12.005171 waagent[1955]: pkts bytes target prot opt in out source destination
Jan 14 13:19:12.005171 waagent[1955]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 14 13:19:12.005171 waagent[1955]: pkts bytes target prot opt in out source destination
Jan 14 13:19:12.005171 waagent[1955]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 14 13:19:12.005171 waagent[1955]: pkts bytes target prot opt in out source destination
Jan 14 13:19:12.005171 waagent[1955]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 14 13:19:12.005171 waagent[1955]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 14 13:19:12.005171 waagent[1955]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 14 13:19:12.008843 waagent[1955]: 2025-01-14T13:19:12.008780Z INFO EnvHandler ExtHandler Current Firewall rules:
Jan 14 13:19:12.008843 waagent[1955]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 14 13:19:12.008843 waagent[1955]: pkts bytes target prot opt in out source destination
Jan 14 13:19:12.008843 waagent[1955]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 14 13:19:12.008843 waagent[1955]: pkts bytes target prot opt in out source destination
Jan 14 13:19:12.008843 waagent[1955]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 14 13:19:12.008843 waagent[1955]: pkts bytes target prot opt in out source destination
Jan 14 13:19:12.008843 waagent[1955]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 14 13:19:12.008843 waagent[1955]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 14 13:19:12.008843 waagent[1955]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 14 13:19:12.009239 waagent[1955]: 2025-01-14T13:19:12.009103Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jan 14 13:19:21.179438 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Jan 14 13:19:21.914105 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 14 13:19:21.919846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:19:22.033242 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:19:22.044934 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:19:22.315749 update_engine[1688]: I20250114 13:19:22.315572 1688 update_attempter.cc:509] Updating boot flags...
Jan 14 13:19:22.560802 kubelet[2208]: E0114 13:19:22.560740 2208 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:19:22.563566 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:19:22.563800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:19:22.656346 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2231)
Jan 14 13:19:22.790997 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2235)
Jan 14 13:19:32.663994 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 14 13:19:32.676901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:19:32.775565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
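The "Azure fabric firewall rules" logged above restrict access to the WireServer (168.63.129.16): DNS-port traffic is allowed, root-owned traffic (the agent itself) is allowed, and new connections from anything else are dropped. A sketch of those three OUTPUT-chain rules expressed as iptables commands; the spellings here are an illustration of what the counters imply, not the agent's exact invocation:

```python
# Sketch: the three OUTPUT-chain rules shown in the log, as iptables commands.
# Illustrative only -- derived from the rule listing, not from waagent's code.
WIRESERVER = "168.63.129.16"

RULES = [
    # 1. allow TCP to port 53 on the wireserver
    ["iptables", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "--dport", "53", "-j", "ACCEPT"],
    # 2. allow traffic owned by UID 0 (the agent runs as root)
    ["iptables", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # 3. drop invalid and new connections from everything else
    ["iptables", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

for rule in RULES:
    print(" ".join(rule))
```

Rule order matters: the two ACCEPT rules must precede the DROP so that DNS and agent traffic match first, which is exactly the order the log shows.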
Jan 14 13:19:32.780168 (kubelet)[2338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:19:32.825304 kubelet[2338]: E0114 13:19:32.825237 2338 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:19:32.828036 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:19:32.828240 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:19:39.670234 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 14 13:19:39.677912 systemd[1]: Started sshd@0-10.200.4.47:22-10.200.16.10:60648.service - OpenSSH per-connection server daemon (10.200.16.10:60648).
Jan 14 13:19:40.372241 sshd[2347]: Accepted publickey for core from 10.200.16.10 port 60648 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:19:40.373825 sshd-session[2347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:19:40.380279 systemd-logind[1684]: New session 3 of user core.
Jan 14 13:19:40.388218 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 14 13:19:40.915901 systemd[1]: Started sshd@1-10.200.4.47:22-10.200.16.10:60662.service - OpenSSH per-connection server daemon (10.200.16.10:60662).
Jan 14 13:19:41.522468 sshd[2352]: Accepted publickey for core from 10.200.16.10 port 60662 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:19:41.523981 sshd-session[2352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:19:41.529801 systemd-logind[1684]: New session 4 of user core.
Jan 14 13:19:41.539007 systemd[1]: Started session-4.scope - Session 4 of User core.
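The kubelet keeps crash-looping because /var/lib/kubelet/config.yaml does not exist yet (it is normally written by `kubeadm init`/`kubeadm join`), and systemd reschedules it on a fixed delay. The "Scheduled restart job" timestamps above land roughly ten seconds apart, consistent with a RestartSec-style delay plus the kubelet's brief run time. A quick sketch that computes those intervals from the log:

```python
from datetime import datetime

# Times of the "Scheduled restart job" lines seen so far in this log
# (restart counters 3, 4, 5), truncated to whole seconds.
stamps = ["13:19:11", "13:19:21", "13:19:32"]
times = [datetime.strptime(s, "%H:%M:%S") for s in stamps]
deltas = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print(deltas)  # [10.0, 11.0]
```

The steady ~10 s cadence tells you this is a scheduled retry loop, not a random failure: nothing will change until the missing config file appears.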
Jan 14 13:19:41.963514 sshd[2354]: Connection closed by 10.200.16.10 port 60662
Jan 14 13:19:41.964665 sshd-session[2352]: pam_unix(sshd:session): session closed for user core
Jan 14 13:19:41.967918 systemd[1]: sshd@1-10.200.4.47:22-10.200.16.10:60662.service: Deactivated successfully.
Jan 14 13:19:41.970286 systemd[1]: session-4.scope: Deactivated successfully.
Jan 14 13:19:41.972088 systemd-logind[1684]: Session 4 logged out. Waiting for processes to exit.
Jan 14 13:19:41.973264 systemd-logind[1684]: Removed session 4.
Jan 14 13:19:42.071687 systemd[1]: Started sshd@2-10.200.4.47:22-10.200.16.10:60676.service - OpenSSH per-connection server daemon (10.200.16.10:60676).
Jan 14 13:19:42.684313 sshd[2359]: Accepted publickey for core from 10.200.16.10 port 60676 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:19:42.686055 sshd-session[2359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:19:42.690432 systemd-logind[1684]: New session 5 of user core.
Jan 14 13:19:42.699180 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 14 13:19:42.914057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 14 13:19:42.920844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:19:43.016974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:19:43.021958 (kubelet)[2370]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:19:43.121546 sshd[2361]: Connection closed by 10.200.16.10 port 60676
Jan 14 13:19:43.122460 sshd-session[2359]: pam_unix(sshd:session): session closed for user core
Jan 14 13:19:43.126170 systemd[1]: sshd@2-10.200.4.47:22-10.200.16.10:60676.service: Deactivated successfully.
Jan 14 13:19:43.127923 systemd[1]: session-5.scope: Deactivated successfully.
Jan 14 13:19:43.128582 systemd-logind[1684]: Session 5 logged out. Waiting for processes to exit.
Jan 14 13:19:43.129486 systemd-logind[1684]: Removed session 5.
Jan 14 13:19:43.228684 systemd[1]: Started sshd@3-10.200.4.47:22-10.200.16.10:60680.service - OpenSSH per-connection server daemon (10.200.16.10:60680).
Jan 14 13:19:43.582693 kubelet[2370]: E0114 13:19:43.582633 2370 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:19:43.585473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:19:43.585684 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:19:43.835412 sshd[2380]: Accepted publickey for core from 10.200.16.10 port 60680 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:19:43.836959 sshd-session[2380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:19:43.842147 systemd-logind[1684]: New session 6 of user core.
Jan 14 13:19:43.848769 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 14 13:19:44.269490 sshd[2384]: Connection closed by 10.200.16.10 port 60680
Jan 14 13:19:44.270370 sshd-session[2380]: pam_unix(sshd:session): session closed for user core
Jan 14 13:19:44.273703 systemd[1]: sshd@3-10.200.4.47:22-10.200.16.10:60680.service: Deactivated successfully.
Jan 14 13:19:44.276112 systemd[1]: session-6.scope: Deactivated successfully.
Jan 14 13:19:44.277826 systemd-logind[1684]: Session 6 logged out. Waiting for processes to exit.
Jan 14 13:19:44.278899 systemd-logind[1684]: Removed session 6.
Jan 14 13:19:44.380889 systemd[1]: Started sshd@4-10.200.4.47:22-10.200.16.10:60686.service - OpenSSH per-connection server daemon (10.200.16.10:60686).
Jan 14 13:19:44.988813 sshd[2389]: Accepted publickey for core from 10.200.16.10 port 60686 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:19:44.990505 sshd-session[2389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:19:44.996092 systemd-logind[1684]: New session 7 of user core.
Jan 14 13:19:45.002781 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 14 13:19:45.513989 sudo[2392]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 14 13:19:45.514463 sudo[2392]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 13:19:45.542242 sudo[2392]: pam_unix(sudo:session): session closed for user root
Jan 14 13:19:45.641976 sshd[2391]: Connection closed by 10.200.16.10 port 60686
Jan 14 13:19:45.643239 sshd-session[2389]: pam_unix(sshd:session): session closed for user core
Jan 14 13:19:45.646948 systemd[1]: sshd@4-10.200.4.47:22-10.200.16.10:60686.service: Deactivated successfully.
Jan 14 13:19:45.649280 systemd[1]: session-7.scope: Deactivated successfully.
Jan 14 13:19:45.650972 systemd-logind[1684]: Session 7 logged out. Waiting for processes to exit.
Jan 14 13:19:45.652124 systemd-logind[1684]: Removed session 7.
Jan 14 13:19:45.750688 systemd[1]: Started sshd@5-10.200.4.47:22-10.200.16.10:60696.service - OpenSSH per-connection server daemon (10.200.16.10:60696).
Jan 14 13:19:46.361880 sshd[2397]: Accepted publickey for core from 10.200.16.10 port 60696 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:19:46.363667 sshd-session[2397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:19:46.368775 systemd-logind[1684]: New session 8 of user core.
Jan 14 13:19:46.374762 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 14 13:19:46.698045 sudo[2401]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 14 13:19:46.698402 sudo[2401]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 13:19:46.701868 sudo[2401]: pam_unix(sudo:session): session closed for user root
Jan 14 13:19:46.706860 sudo[2400]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 14 13:19:46.707203 sudo[2400]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 13:19:46.719992 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 13:19:46.746657 augenrules[2423]: No rules
Jan 14 13:19:46.748091 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 13:19:46.748318 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 13:19:46.749746 sudo[2400]: pam_unix(sudo:session): session closed for user root
Jan 14 13:19:46.856906 sshd[2399]: Connection closed by 10.200.16.10 port 60696
Jan 14 13:19:46.857837 sshd-session[2397]: pam_unix(sshd:session): session closed for user core
Jan 14 13:19:46.860869 systemd[1]: sshd@5-10.200.4.47:22-10.200.16.10:60696.service: Deactivated successfully.
Jan 14 13:19:46.862830 systemd[1]: session-8.scope: Deactivated successfully.
Jan 14 13:19:46.864230 systemd-logind[1684]: Session 8 logged out. Waiting for processes to exit.
Jan 14 13:19:46.865377 systemd-logind[1684]: Removed session 8.
Jan 14 13:19:46.967917 systemd[1]: Started sshd@6-10.200.4.47:22-10.200.16.10:53136.service - OpenSSH per-connection server daemon (10.200.16.10:53136).
Jan 14 13:19:47.583160 sshd[2431]: Accepted publickey for core from 10.200.16.10 port 53136 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:19:47.584880 sshd-session[2431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:19:47.589706 systemd-logind[1684]: New session 9 of user core.
Jan 14 13:19:47.598789 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 14 13:19:47.917824 sudo[2434]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 14 13:19:47.918186 sudo[2434]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 13:19:49.393985 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 14 13:19:49.394565 (dockerd)[2451]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 14 13:19:50.848179 dockerd[2451]: time="2025-01-14T13:19:50.848111660Z" level=info msg="Starting up"
Jan 14 13:19:51.296169 dockerd[2451]: time="2025-01-14T13:19:51.295850196Z" level=info msg="Loading containers: start."
Jan 14 13:19:51.511796 kernel: Initializing XFRM netlink socket
Jan 14 13:19:51.649770 systemd-networkd[1438]: docker0: Link UP
Jan 14 13:19:51.727885 dockerd[2451]: time="2025-01-14T13:19:51.727830780Z" level=info msg="Loading containers: done."
Jan 14 13:19:51.770892 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2364514570-merged.mount: Deactivated successfully.
Jan 14 13:19:51.782726 dockerd[2451]: time="2025-01-14T13:19:51.782669253Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 14 13:19:51.782874 dockerd[2451]: time="2025-01-14T13:19:51.782810457Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Jan 14 13:19:51.782991 dockerd[2451]: time="2025-01-14T13:19:51.782968261Z" level=info msg="Daemon has completed initialization"
Jan 14 13:19:51.849479 dockerd[2451]: time="2025-01-14T13:19:51.849105057Z" level=info msg="API listen on /run/docker.sock"
Jan 14 13:19:51.849437 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 14 13:19:53.409200 containerd[1713]: time="2025-01-14T13:19:53.407772042Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Jan 14 13:19:53.663997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 14 13:19:53.669170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:19:53.777439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:19:53.787946 (kubelet)[2647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:19:53.835147 kubelet[2647]: E0114 13:19:53.835079 2647 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:19:53.837918 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:19:53.838123 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:19:54.845119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1124443233.mount: Deactivated successfully.
Jan 14 13:19:56.801419 containerd[1713]: time="2025-01-14T13:19:56.801358229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:19:56.805168 containerd[1713]: time="2025-01-14T13:19:56.804948832Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262"
Jan 14 13:19:56.810338 containerd[1713]: time="2025-01-14T13:19:56.810272884Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:19:56.814598 containerd[1713]: time="2025-01-14T13:19:56.814545307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:19:56.815569 containerd[1713]: time="2025-01-14T13:19:56.815532335Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.407709191s"
Jan 14 13:19:56.815667 containerd[1713]: time="2025-01-14T13:19:56.815576136Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Jan 14 13:19:56.836830 containerd[1713]: time="2025-01-14T13:19:56.836782344Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Jan 14 13:19:58.831911 containerd[1713]: time="2025-01-14T13:19:58.831853718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:19:58.835402 containerd[1713]: time="2025-01-14T13:19:58.835338918Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740"
Jan 14 13:19:58.839027 containerd[1713]: time="2025-01-14T13:19:58.838974122Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:19:58.843839 containerd[1713]: time="2025-01-14T13:19:58.843806761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:19:58.844947 containerd[1713]: time="2025-01-14T13:19:58.844761588Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.007938543s"
Jan 14 13:19:58.844947 containerd[1713]: time="2025-01-14T13:19:58.844799289Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Jan 14 13:19:58.869961 containerd[1713]: time="2025-01-14T13:19:58.869917509Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Jan 14 13:20:00.210751 containerd[1713]: time="2025-01-14T13:20:00.210689133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:00.213259 containerd[1713]: time="2025-01-14T13:20:00.213176404Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830"
Jan 14 13:20:00.217245 containerd[1713]: time="2025-01-14T13:20:00.217211620Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:00.223740 containerd[1713]: time="2025-01-14T13:20:00.223682305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:00.226631 containerd[1713]: time="2025-01-14T13:20:00.225185248Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.355216438s"
Jan 14 13:20:00.226631 containerd[1713]: time="2025-01-14T13:20:00.225233750Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Jan 14 13:20:00.253232 containerd[1713]: time="2025-01-14T13:20:00.253186051Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Jan 14 13:20:01.512197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3524373622.mount: Deactivated successfully.
Jan 14 13:20:02.105631 containerd[1713]: time="2025-01-14T13:20:02.105543735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:02.109298 containerd[1713]: time="2025-01-14T13:20:02.109227841Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966"
Jan 14 13:20:02.115303 containerd[1713]: time="2025-01-14T13:20:02.115228913Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:02.122094 containerd[1713]: time="2025-01-14T13:20:02.121981006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:02.123244 containerd[1713]: time="2025-01-14T13:20:02.122745128Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.869325571s"
Jan 14 13:20:02.123244 containerd[1713]: time="2025-01-14T13:20:02.122786429Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Jan 14 13:20:02.147127 containerd[1713]: time="2025-01-14T13:20:02.147069525Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 14 13:20:02.867293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4142655987.mount: Deactivated successfully.
Jan 14 13:20:03.914197 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 14 13:20:03.921946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:20:04.067806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:20:04.068961 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:20:04.140636 kubelet[2797]: E0114 13:20:04.139740 2797 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:20:04.143512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:20:04.143751 systemd[1]: kubelet.service: Failed with result 'exit-code'.
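Each "Pulled image ... size ... in ..." line above carries both the image size in bytes and the wall-clock pull duration, which makes it easy to estimate the effective download rate. A sketch using the four control-plane pulls logged so far:

```python
# Sketch: effective pull rate from the "Pulled image" lines in this log.
# (size in bytes, duration in seconds) pairs are copied from the log text.
pulls = {
    "kube-apiserver:v1.29.12":          (35136054, 3.407709191),
    "kube-controller-manager:v1.29.12": (33662844, 2.007938543),
    "kube-scheduler:v1.29.12":          (18777952, 1.355216438),
    "kube-proxy:v1.29.12":              (28618977, 1.869325571),
}
for image, (size_bytes, seconds) in pulls.items():
    print(f"{image}: {size_bytes / seconds / 1e6:.1f} MB/s")
```

The rates cluster around 10-17 MB/s, so the pulls are network-bound and proportional to image size, with no sign of registry throttling on this boot.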
Jan 14 13:20:04.697740 containerd[1713]: time="2025-01-14T13:20:04.697677607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:04.701130 containerd[1713]: time="2025-01-14T13:20:04.701059404Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Jan 14 13:20:04.705015 containerd[1713]: time="2025-01-14T13:20:04.704958416Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:04.712496 containerd[1713]: time="2025-01-14T13:20:04.712428929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:04.714370 containerd[1713]: time="2025-01-14T13:20:04.713461159Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.566341533s"
Jan 14 13:20:04.714370 containerd[1713]: time="2025-01-14T13:20:04.713501460Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 14 13:20:04.737255 containerd[1713]: time="2025-01-14T13:20:04.737220439Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 14 13:20:05.406720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4281491890.mount: Deactivated successfully.
Jan 14 13:20:05.428115 containerd[1713]: time="2025-01-14T13:20:05.428044800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:05.431251 containerd[1713]: time="2025-01-14T13:20:05.431180490Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Jan 14 13:20:05.435944 containerd[1713]: time="2025-01-14T13:20:05.435878624Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:05.441507 containerd[1713]: time="2025-01-14T13:20:05.441450183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:05.442375 containerd[1713]: time="2025-01-14T13:20:05.442214605Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 704.77106ms"
Jan 14 13:20:05.442375 containerd[1713]: time="2025-01-14T13:20:05.442252906Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 14 13:20:05.466563 containerd[1713]: time="2025-01-14T13:20:05.466512000Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jan 14 13:20:06.219888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4199183120.mount: Deactivated successfully.
Jan 14 13:20:08.543696 containerd[1713]: time="2025-01-14T13:20:08.543627723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:08.546326 containerd[1713]: time="2025-01-14T13:20:08.546271098Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633"
Jan 14 13:20:08.550445 containerd[1713]: time="2025-01-14T13:20:08.550385316Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:08.555080 containerd[1713]: time="2025-01-14T13:20:08.555027249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:08.556132 containerd[1713]: time="2025-01-14T13:20:08.556090179Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.089536378s"
Jan 14 13:20:08.556221 containerd[1713]: time="2025-01-14T13:20:08.556137681Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jan 14 13:20:11.246395 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:20:11.252907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:20:11.282154 systemd[1]: Reloading requested from client PID 2926 ('systemctl') (unit session-9.scope)...
Jan 14 13:20:11.282178 systemd[1]: Reloading...
Jan 14 13:20:11.384654 zram_generator::config[2966]: No configuration found.
Jan 14 13:20:11.525582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:20:11.619289 systemd[1]: Reloading finished in 336 ms.
Jan 14 13:20:11.668305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:20:11.678237 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:20:11.678705 systemd[1]: kubelet.service: Deactivated successfully.
Jan 14 13:20:11.678938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:20:11.681812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:20:11.901201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:20:11.910969 (kubelet)[3039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 14 13:20:11.957762 kubelet[3039]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 14 13:20:11.957762 kubelet[3039]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 14 13:20:11.957762 kubelet[3039]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 14 13:20:11.958271 kubelet[3039]: I0114 13:20:11.957824 3039 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 14 13:20:12.415996 kubelet[3039]: I0114 13:20:12.415955 3039 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 14 13:20:12.415996 kubelet[3039]: I0114 13:20:12.415988 3039 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 14 13:20:12.416292 kubelet[3039]: I0114 13:20:12.416268 3039 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 14 13:20:12.526741 kubelet[3039]: E0114 13:20:12.526678 3039 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.4.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.47:6443: connect: connection refused
Jan 14 13:20:12.527155 kubelet[3039]: I0114 13:20:12.527124 3039 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 14 13:20:12.542626 kubelet[3039]: I0114 13:20:12.542228 3039 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 14 13:20:12.542626 kubelet[3039]: I0114 13:20:12.542501 3039 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 14 13:20:12.542892 kubelet[3039]: I0114 13:20:12.542859 3039 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 14 13:20:12.543756 kubelet[3039]: I0114 13:20:12.543730 3039 topology_manager.go:138] "Creating topology manager with none policy"
Jan 14 13:20:12.543756 kubelet[3039]: I0114 13:20:12.543757 3039 container_manager_linux.go:301] "Creating device plugin manager"
Jan 14 13:20:12.543912 kubelet[3039]: I0114 13:20:12.543894 3039 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 13:20:12.544028 kubelet[3039]: I0114 13:20:12.544014 3039 kubelet.go:396] "Attempting to sync node with API server"
Jan 14 13:20:12.544083 kubelet[3039]: I0114 13:20:12.544036 3039 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 14 13:20:12.544083 kubelet[3039]: I0114 13:20:12.544073 3039 kubelet.go:312] "Adding apiserver pod source"
Jan 14 13:20:12.544283 kubelet[3039]: I0114 13:20:12.544094 3039 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 14 13:20:12.548630 kubelet[3039]: W0114 13:20:12.547436 3039 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.4.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-ae9609fe4e&limit=500&resourceVersion=0": dial tcp 10.200.4.47:6443: connect: connection refused
Jan 14 13:20:12.548630 kubelet[3039]: E0114 13:20:12.547511 3039 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-ae9609fe4e&limit=500&resourceVersion=0": dial tcp 10.200.4.47:6443: connect: connection refused
Jan 14 13:20:12.548630 kubelet[3039]: W0114 13:20:12.547585 3039 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.4.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.47:6443: connect: connection refused
Jan 14 13:20:12.548630 kubelet[3039]: E0114 13:20:12.547641 3039 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.47:6443: connect: connection refused
Jan 14 13:20:12.549368 kubelet[3039]: I0114 13:20:12.549350 3039 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 14 13:20:12.553167 kubelet[3039]: I0114 13:20:12.553144 3039 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 14 13:20:12.553346 kubelet[3039]: W0114 13:20:12.553333 3039 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 14 13:20:12.555434 kubelet[3039]: I0114 13:20:12.555411 3039 server.go:1256] "Started kubelet"
Jan 14 13:20:12.556046 kubelet[3039]: I0114 13:20:12.556010 3039 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 14 13:20:12.557023 kubelet[3039]: I0114 13:20:12.556999 3039 server.go:461] "Adding debug handlers to kubelet server"
Jan 14 13:20:12.559380 kubelet[3039]: I0114 13:20:12.559226 3039 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 14 13:20:12.562476 kubelet[3039]: I0114 13:20:12.562456 3039 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 14 13:20:12.562846 kubelet[3039]: I0114 13:20:12.562822 3039 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 14 13:20:12.567357 kubelet[3039]: E0114 13:20:12.567328 3039 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.47:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.47:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.0-a-ae9609fe4e.181a91b4eca8edc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.0-a-ae9609fe4e,UID:ci-4152.2.0-a-ae9609fe4e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.0-a-ae9609fe4e,},FirstTimestamp:2025-01-14 13:20:12.555382216 +0000 UTC m=+0.640118447,LastTimestamp:2025-01-14 13:20:12.555382216 +0000 UTC m=+0.640118447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.0-a-ae9609fe4e,}"
Jan 14 13:20:12.567527 kubelet[3039]: I0114 13:20:12.567457 3039 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 14 13:20:12.567575 kubelet[3039]: I0114 13:20:12.567565 3039 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 14 13:20:12.568680 kubelet[3039]: I0114 13:20:12.568346 3039 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 14 13:20:12.568870 kubelet[3039]: W0114 13:20:12.568823 3039 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.4.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.47:6443: connect: connection refused
Jan 14 13:20:12.568931 kubelet[3039]: E0114 13:20:12.568889 3039 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.47:6443: connect: connection refused
Jan 14 13:20:12.569419 kubelet[3039]: E0114 13:20:12.569395 3039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-ae9609fe4e?timeout=10s\": dial tcp 10.200.4.47:6443: connect: connection refused" interval="200ms"
Jan 14 13:20:12.570858 kubelet[3039]: I0114 13:20:12.570440 3039 factory.go:221] Registration of the systemd container factory successfully
Jan 14 13:20:12.570858 kubelet[3039]: I0114 13:20:12.570529 3039 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 14 13:20:12.571811 kubelet[3039]: I0114 13:20:12.571792 3039 factory.go:221] Registration of the containerd container factory successfully
Jan 14 13:20:12.578994 kubelet[3039]: E0114 13:20:12.578966 3039 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 14 13:20:12.599535 kubelet[3039]: I0114 13:20:12.599369 3039 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 14 13:20:12.600988 kubelet[3039]: I0114 13:20:12.600939 3039 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 14 13:20:12.600988 kubelet[3039]: I0114 13:20:12.600977 3039 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 14 13:20:12.602157 kubelet[3039]: I0114 13:20:12.601004 3039 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 14 13:20:12.602157 kubelet[3039]: E0114 13:20:12.601061 3039 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 14 13:20:12.602736 kubelet[3039]: W0114 13:20:12.602573 3039 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.4.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.47:6443: connect: connection refused
Jan 14 13:20:12.602736 kubelet[3039]: E0114 13:20:12.602699 3039 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.4.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.47:6443: connect: connection refused
Jan 14 13:20:12.631693 kubelet[3039]: I0114 13:20:12.631659 3039 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 14 13:20:12.631858 kubelet[3039]: I0114 13:20:12.631708 3039 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 14 13:20:12.631858 kubelet[3039]: I0114 13:20:12.631734 3039 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 13:20:12.638453 kubelet[3039]: I0114 13:20:12.638411 3039 policy_none.go:49] "None policy: Start"
Jan 14 13:20:12.639207 kubelet[3039]: I0114 13:20:12.639179 3039 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 14 13:20:12.639288 kubelet[3039]: I0114 13:20:12.639213 3039 state_mem.go:35] "Initializing new in-memory state store"
Jan 14 13:20:12.647824 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 14 13:20:12.658904 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 14 13:20:12.662340 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 14 13:20:12.670760 kubelet[3039]: I0114 13:20:12.670147 3039 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:12.670760 kubelet[3039]: E0114 13:20:12.670564 3039 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.47:6443/api/v1/nodes\": dial tcp 10.200.4.47:6443: connect: connection refused" node="ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:12.670927 kubelet[3039]: I0114 13:20:12.670860 3039 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 14 13:20:12.671189 kubelet[3039]: I0114 13:20:12.671149 3039 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 14 13:20:12.673529 kubelet[3039]: E0114 13:20:12.673460 3039 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.0-a-ae9609fe4e\" not found"
Jan 14 13:20:12.701988 kubelet[3039]: I0114 13:20:12.701924 3039 topology_manager.go:215] "Topology Admit Handler" podUID="2d322d43db32ac4cc21c6f5effa6a133" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:12.704156 kubelet[3039]: I0114 13:20:12.704124 3039 topology_manager.go:215] "Topology Admit Handler" podUID="e06af29120d1526a46702bccfd2cd2f8" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:12.706023 kubelet[3039]: I0114 13:20:12.705727 3039 topology_manager.go:215] "Topology Admit Handler" podUID="0ee4eb97b560217fbc1ab50f4cc5549d" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:12.712744 systemd[1]: Created slice kubepods-burstable-pod2d322d43db32ac4cc21c6f5effa6a133.slice - libcontainer container kubepods-burstable-pod2d322d43db32ac4cc21c6f5effa6a133.slice.
Jan 14 13:20:12.737467 systemd[1]: Created slice kubepods-burstable-pode06af29120d1526a46702bccfd2cd2f8.slice - libcontainer container kubepods-burstable-pode06af29120d1526a46702bccfd2cd2f8.slice.
Jan 14 13:20:12.742198 systemd[1]: Created slice kubepods-burstable-pod0ee4eb97b560217fbc1ab50f4cc5549d.slice - libcontainer container kubepods-burstable-pod0ee4eb97b560217fbc1ab50f4cc5549d.slice.
Jan 14 13:20:12.769121 kubelet[3039]: I0114 13:20:12.769070 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e06af29120d1526a46702bccfd2cd2f8-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.0-a-ae9609fe4e\" (UID: \"e06af29120d1526a46702bccfd2cd2f8\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:12.769353 kubelet[3039]: I0114 13:20:12.769184 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e06af29120d1526a46702bccfd2cd2f8-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.0-a-ae9609fe4e\" (UID: \"e06af29120d1526a46702bccfd2cd2f8\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:12.769353 kubelet[3039]: I0114 13:20:12.769237 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e06af29120d1526a46702bccfd2cd2f8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.0-a-ae9609fe4e\" (UID: \"e06af29120d1526a46702bccfd2cd2f8\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:12.769353 kubelet[3039]: I0114 13:20:12.769270 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ee4eb97b560217fbc1ab50f4cc5549d-kubeconfig\") pod \"kube-scheduler-ci-4152.2.0-a-ae9609fe4e\" (UID: \"0ee4eb97b560217fbc1ab50f4cc5549d\") " pod="kube-system/kube-scheduler-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:12.769353 kubelet[3039]: I0114 13:20:12.769303 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d322d43db32ac4cc21c6f5effa6a133-ca-certs\") pod \"kube-apiserver-ci-4152.2.0-a-ae9609fe4e\" (UID: \"2d322d43db32ac4cc21c6f5effa6a133\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:12.769353 kubelet[3039]: I0114 13:20:12.769336 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d322d43db32ac4cc21c6f5effa6a133-k8s-certs\") pod \"kube-apiserver-ci-4152.2.0-a-ae9609fe4e\" (UID: \"2d322d43db32ac4cc21c6f5effa6a133\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:12.769667 kubelet[3039]: I0114 13:20:12.769369 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e06af29120d1526a46702bccfd2cd2f8-ca-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-ae9609fe4e\" (UID: \"e06af29120d1526a46702bccfd2cd2f8\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:12.769667 kubelet[3039]: I0114 13:20:12.769404 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d322d43db32ac4cc21c6f5effa6a133-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.0-a-ae9609fe4e\" (UID: \"2d322d43db32ac4cc21c6f5effa6a133\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:12.769667 kubelet[3039]: I0114 13:20:12.769458 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e06af29120d1526a46702bccfd2cd2f8-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-ae9609fe4e\" (UID: \"e06af29120d1526a46702bccfd2cd2f8\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:12.769964 kubelet[3039]: E0114 13:20:12.769939 3039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-ae9609fe4e?timeout=10s\": dial tcp 10.200.4.47:6443: connect: connection refused" interval="400ms"
Jan 14 13:20:12.872746 kubelet[3039]: I0114 13:20:12.872703 3039 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:12.873145 kubelet[3039]: E0114 13:20:12.873116 3039 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.47:6443/api/v1/nodes\": dial tcp 10.200.4.47:6443: connect: connection refused" node="ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:13.037221 containerd[1713]: time="2025-01-14T13:20:13.036993847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.0-a-ae9609fe4e,Uid:2d322d43db32ac4cc21c6f5effa6a133,Namespace:kube-system,Attempt:0,}"
Jan 14 13:20:13.041060 containerd[1713]: time="2025-01-14T13:20:13.040764452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.0-a-ae9609fe4e,Uid:e06af29120d1526a46702bccfd2cd2f8,Namespace:kube-system,Attempt:0,}"
Jan 14 13:20:13.045022 containerd[1713]: time="2025-01-14T13:20:13.044918568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.0-a-ae9609fe4e,Uid:0ee4eb97b560217fbc1ab50f4cc5549d,Namespace:kube-system,Attempt:0,}"
Jan 14 13:20:13.171148 kubelet[3039]: E0114 13:20:13.171104 3039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-ae9609fe4e?timeout=10s\": dial tcp 10.200.4.47:6443: connect: connection refused" interval="800ms"
Jan 14 13:20:13.276125 kubelet[3039]: I0114 13:20:13.276012 3039 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:13.276572 kubelet[3039]: E0114 13:20:13.276545 3039 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.47:6443/api/v1/nodes\": dial tcp 10.200.4.47:6443: connect: connection refused" node="ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:13.436035 kubelet[3039]: W0114 13:20:13.435967 3039 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.4.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.47:6443: connect: connection refused
Jan 14 13:20:13.436035 kubelet[3039]: E0114 13:20:13.436040 3039 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.47:6443: connect: connection refused
Jan 14 13:20:13.641399 kubelet[3039]: W0114 13:20:13.641354 3039 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.4.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-ae9609fe4e&limit=500&resourceVersion=0": dial tcp 10.200.4.47:6443: connect: connection refused
Jan 14 13:20:13.641399 kubelet[3039]: E0114 13:20:13.641400 3039 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-ae9609fe4e&limit=500&resourceVersion=0": dial tcp 10.200.4.47:6443: connect: connection refused
Jan 14 13:20:13.663634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1630344337.mount: Deactivated successfully.
Jan 14 13:20:13.694205 containerd[1713]: time="2025-01-14T13:20:13.694062372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 13:20:13.708691 containerd[1713]: time="2025-01-14T13:20:13.708523075Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 14 13:20:13.712751 containerd[1713]: time="2025-01-14T13:20:13.712711392Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 13:20:13.716568 containerd[1713]: time="2025-01-14T13:20:13.716529798Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 13:20:13.730464 containerd[1713]: time="2025-01-14T13:20:13.730017474Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 14 13:20:13.735068 containerd[1713]: time="2025-01-14T13:20:13.734989213Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 13:20:13.738943 containerd[1713]: time="2025-01-14T13:20:13.738900422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 13:20:13.739928 containerd[1713]: time="2025-01-14T13:20:13.739876049Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 694.858679ms"
Jan 14 13:20:13.741515 containerd[1713]: time="2025-01-14T13:20:13.741398892Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 14 13:20:13.745181 containerd[1713]: time="2025-01-14T13:20:13.745146096Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 708.019645ms"
Jan 14 13:20:13.753103 containerd[1713]: time="2025-01-14T13:20:13.753065317Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 712.205962ms"
Jan 14 13:20:13.873233 kubelet[3039]: W0114 13:20:13.873151 3039 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.4.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.47:6443: connect: connection refused
Jan 14 13:20:13.873233 kubelet[3039]: E0114 13:20:13.873209 3039 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.4.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.47:6443: connect: connection refused
Jan 14 13:20:13.938955 kubelet[3039]: W0114 13:20:13.938906 3039 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.4.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.47:6443: connect: connection refused
Jan 14 13:20:13.938955 kubelet[3039]: E0114 13:20:13.938956 3039 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.47:6443: connect: connection refused
Jan 14 13:20:13.972848 kubelet[3039]: E0114 13:20:13.972716 3039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-ae9609fe4e?timeout=10s\": dial tcp 10.200.4.47:6443: connect: connection refused" interval="1.6s"
Jan 14 13:20:14.079340 kubelet[3039]: I0114 13:20:14.079286 3039 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:14.079691 kubelet[3039]: E0114 13:20:14.079668 3039 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.47:6443/api/v1/nodes\": dial tcp 10.200.4.47:6443: connect: connection refused" node="ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:14.337597 containerd[1713]: time="2025-01-14T13:20:14.335054748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 13:20:14.337597 containerd[1713]: time="2025-01-14T13:20:14.335121850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 13:20:14.337597 containerd[1713]: time="2025-01-14T13:20:14.335143150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:20:14.337597 containerd[1713]: time="2025-01-14T13:20:14.336741695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:20:14.340593 containerd[1713]: time="2025-01-14T13:20:14.334392129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 13:20:14.340593 containerd[1713]: time="2025-01-14T13:20:14.340155590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 13:20:14.340593 containerd[1713]: time="2025-01-14T13:20:14.340176591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:20:14.340593 containerd[1713]: time="2025-01-14T13:20:14.340275594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:20:14.343335 containerd[1713]: time="2025-01-14T13:20:14.342871166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 13:20:14.343335 containerd[1713]: time="2025-01-14T13:20:14.342939868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 13:20:14.343335 containerd[1713]: time="2025-01-14T13:20:14.342960168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:20:14.343335 containerd[1713]: time="2025-01-14T13:20:14.343087372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:20:14.367951 systemd[1]: Started cri-containerd-3f94f27e494c1c55420a8f55612fe9adc25ae782abd54afdd581493a91445da9.scope - libcontainer container 3f94f27e494c1c55420a8f55612fe9adc25ae782abd54afdd581493a91445da9. Jan 14 13:20:14.376897 systemd[1]: Started cri-containerd-f85121342a10de74e16c11892d2c77b00aaefba0ce1f5d7e889ee40744b2766b.scope - libcontainer container f85121342a10de74e16c11892d2c77b00aaefba0ce1f5d7e889ee40744b2766b. Jan 14 13:20:14.391814 systemd[1]: Started cri-containerd-221d340deb8018e0728463bcc63f4668e0f6ddf347ca0080bda60e7be919c241.scope - libcontainer container 221d340deb8018e0728463bcc63f4668e0f6ddf347ca0080bda60e7be919c241. Jan 14 13:20:14.466848 containerd[1713]: time="2025-01-14T13:20:14.466802822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.0-a-ae9609fe4e,Uid:e06af29120d1526a46702bccfd2cd2f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f85121342a10de74e16c11892d2c77b00aaefba0ce1f5d7e889ee40744b2766b\"" Jan 14 13:20:14.471634 containerd[1713]: time="2025-01-14T13:20:14.470159716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.0-a-ae9609fe4e,Uid:0ee4eb97b560217fbc1ab50f4cc5549d,Namespace:kube-system,Attempt:0,} returns sandbox id \"221d340deb8018e0728463bcc63f4668e0f6ddf347ca0080bda60e7be919c241\"" Jan 14 13:20:14.478774 containerd[1713]: time="2025-01-14T13:20:14.478735855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.0-a-ae9609fe4e,Uid:2d322d43db32ac4cc21c6f5effa6a133,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f94f27e494c1c55420a8f55612fe9adc25ae782abd54afdd581493a91445da9\"" Jan 14 13:20:14.484507 containerd[1713]: time="2025-01-14T13:20:14.484467315Z" level=info msg="CreateContainer within sandbox \"221d340deb8018e0728463bcc63f4668e0f6ddf347ca0080bda60e7be919c241\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 13:20:14.485671 containerd[1713]: time="2025-01-14T13:20:14.485635447Z" level=info msg="CreateContainer within sandbox \"f85121342a10de74e16c11892d2c77b00aaefba0ce1f5d7e889ee40744b2766b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 13:20:14.508228 containerd[1713]: time="2025-01-14T13:20:14.508185676Z" level=info msg="CreateContainer within sandbox \"3f94f27e494c1c55420a8f55612fe9adc25ae782abd54afdd581493a91445da9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 13:20:14.550935 containerd[1713]: time="2025-01-14T13:20:14.550879267Z" level=info msg="CreateContainer within sandbox \"221d340deb8018e0728463bcc63f4668e0f6ddf347ca0080bda60e7be919c241\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f35bf7d7b5749c649fa1456c9c1ecc42e117fe5706423a953ab1ed9c330ad574\"" Jan 14 13:20:14.551698 containerd[1713]: time="2025-01-14T13:20:14.551663489Z" level=info msg="StartContainer for \"f35bf7d7b5749c649fa1456c9c1ecc42e117fe5706423a953ab1ed9c330ad574\"" Jan 14 13:20:14.583809 systemd[1]: Started cri-containerd-f35bf7d7b5749c649fa1456c9c1ecc42e117fe5706423a953ab1ed9c330ad574.scope - libcontainer container f35bf7d7b5749c649fa1456c9c1ecc42e117fe5706423a953ab1ed9c330ad574. 
Jan 14 13:20:14.592896 containerd[1713]: time="2025-01-14T13:20:14.590883783Z" level=info msg="CreateContainer within sandbox \"3f94f27e494c1c55420a8f55612fe9adc25ae782abd54afdd581493a91445da9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"084c7b374847f5b29bc97d539db4b07642ee98f90446c6589ad032eda4dfb4f9\""
Jan 14 13:20:14.592896 containerd[1713]: time="2025-01-14T13:20:14.591483199Z" level=info msg="StartContainer for \"084c7b374847f5b29bc97d539db4b07642ee98f90446c6589ad032eda4dfb4f9\""
Jan 14 13:20:14.595631 containerd[1713]: time="2025-01-14T13:20:14.594433982Z" level=info msg="CreateContainer within sandbox \"f85121342a10de74e16c11892d2c77b00aaefba0ce1f5d7e889ee40744b2766b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4e12991608141fcccfbc71573b404d57187b9517563cc104b0927827de488cd0\""
Jan 14 13:20:14.596127 containerd[1713]: time="2025-01-14T13:20:14.596101828Z" level=info msg="StartContainer for \"4e12991608141fcccfbc71573b404d57187b9517563cc104b0927827de488cd0\""
Jan 14 13:20:14.649853 systemd[1]: Started cri-containerd-4e12991608141fcccfbc71573b404d57187b9517563cc104b0927827de488cd0.scope - libcontainer container 4e12991608141fcccfbc71573b404d57187b9517563cc104b0927827de488cd0.
Jan 14 13:20:14.689008 systemd[1]: Started cri-containerd-084c7b374847f5b29bc97d539db4b07642ee98f90446c6589ad032eda4dfb4f9.scope - libcontainer container 084c7b374847f5b29bc97d539db4b07642ee98f90446c6589ad032eda4dfb4f9.
Jan 14 13:20:14.692174 kubelet[3039]: E0114 13:20:14.691179 3039 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.4.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.47:6443: connect: connection refused
Jan 14 13:20:14.822384 containerd[1713]: time="2025-01-14T13:20:14.822328237Z" level=info msg="StartContainer for \"084c7b374847f5b29bc97d539db4b07642ee98f90446c6589ad032eda4dfb4f9\" returns successfully"
Jan 14 13:20:14.822650 containerd[1713]: time="2025-01-14T13:20:14.822350238Z" level=info msg="StartContainer for \"4e12991608141fcccfbc71573b404d57187b9517563cc104b0927827de488cd0\" returns successfully"
Jan 14 13:20:14.822650 containerd[1713]: time="2025-01-14T13:20:14.822358238Z" level=info msg="StartContainer for \"f35bf7d7b5749c649fa1456c9c1ecc42e117fe5706423a953ab1ed9c330ad574\" returns successfully"
Jan 14 13:20:16.310738 kubelet[3039]: I0114 13:20:16.310704 3039 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:17.151880 kubelet[3039]: E0114 13:20:17.151820 3039 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152.2.0-a-ae9609fe4e\" not found" node="ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:18.253067 kubelet[3039]: E0114 13:20:18.252484 3039 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152.2.0-a-ae9609fe4e.181a91b4eca8edc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.0-a-ae9609fe4e,UID:ci-4152.2.0-a-ae9609fe4e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.0-a-ae9609fe4e,},FirstTimestamp:2025-01-14 13:20:12.555382216 +0000 UTC m=+0.640118447,LastTimestamp:2025-01-14 13:20:12.555382216 +0000 UTC m=+0.640118447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.0-a-ae9609fe4e,}"
Jan 14 13:20:18.255445 kubelet[3039]: I0114 13:20:18.255160 3039 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:18.276637 kubelet[3039]: W0114 13:20:18.275423 3039 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 14 13:20:18.276637 kubelet[3039]: W0114 13:20:18.276195 3039 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 14 13:20:18.276637 kubelet[3039]: W0114 13:20:18.276252 3039 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 14 13:20:19.251885 kubelet[3039]: I0114 13:20:19.251821 3039 apiserver.go:52] "Watching apiserver"
Jan 14 13:20:19.266705 kubelet[3039]: W0114 13:20:19.266669 3039 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 14 13:20:19.267174 kubelet[3039]: E0114 13:20:19.266791 3039 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152.2.0-a-ae9609fe4e\" already exists" pod="kube-system/kube-apiserver-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:19.267873 kubelet[3039]: I0114 13:20:19.267839 3039 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 14 13:20:20.168279 systemd[1]: Reloading requested from client PID 3317 ('systemctl') (unit session-9.scope)...
Jan 14 13:20:20.168296 systemd[1]: Reloading...
Jan 14 13:20:20.251638 zram_generator::config[3356]: No configuration found.
Jan 14 13:20:20.392980 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:20:20.489933 systemd[1]: Reloading finished in 321 ms.
Jan 14 13:20:20.535513 kubelet[3039]: I0114 13:20:20.535047 3039 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 14 13:20:20.535246 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:20:20.543689 systemd[1]: kubelet.service: Deactivated successfully.
Jan 14 13:20:20.543956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:20:20.548213 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:20:20.652304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:20:20.658969 (kubelet)[3424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 14 13:20:20.715114 kubelet[3424]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 14 13:20:20.715114 kubelet[3424]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 14 13:20:20.715114 kubelet[3424]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 14 13:20:20.715730 kubelet[3424]: I0114 13:20:20.715174 3424 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 14 13:20:20.721623 kubelet[3424]: I0114 13:20:20.721589 3424 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 14 13:20:20.722563 kubelet[3424]: I0114 13:20:20.722047 3424 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 14 13:20:20.722563 kubelet[3424]: I0114 13:20:20.722352 3424 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 14 13:20:20.724311 kubelet[3424]: I0114 13:20:20.724285 3424 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 14 13:20:20.729336 kubelet[3424]: I0114 13:20:20.728502 3424 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 14 13:20:20.741703 kubelet[3424]: I0114 13:20:20.741273 3424 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 14 13:20:20.741703 kubelet[3424]: I0114 13:20:20.741524 3424 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 14 13:20:20.743049 kubelet[3424]: I0114 13:20:20.743018 3424 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 14 13:20:20.743049 kubelet[3424]: I0114 13:20:20.743063 3424 topology_manager.go:138] "Creating topology manager with none policy"
Jan 14 13:20:20.743049 kubelet[3424]: I0114 13:20:20.743079 3424 container_manager_linux.go:301] "Creating device plugin manager"
Jan 14 13:20:20.743049 kubelet[3424]: I0114 13:20:20.743126 3424 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 13:20:20.745417 kubelet[3424]: I0114 13:20:20.743248 3424 kubelet.go:396] "Attempting to sync node with API server"
Jan 14 13:20:20.745417 kubelet[3424]: I0114 13:20:20.743269 3424 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 14 13:20:20.745417 kubelet[3424]: I0114 13:20:20.743300 3424 kubelet.go:312] "Adding apiserver pod source"
Jan 14 13:20:20.745417 kubelet[3424]: I0114 13:20:20.743318 3424 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 14 13:20:20.746325 kubelet[3424]: I0114 13:20:20.746307 3424 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 14 13:20:20.746532 kubelet[3424]: I0114 13:20:20.746516 3424 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 14 13:20:20.747075 kubelet[3424]: I0114 13:20:20.747034 3424 server.go:1256] "Started kubelet"
Jan 14 13:20:20.752290 kubelet[3424]: I0114 13:20:20.752257 3424 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 14 13:20:20.765629 kubelet[3424]: I0114 13:20:20.764101 3424 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 14 13:20:20.765629 kubelet[3424]: I0114 13:20:20.765160 3424 server.go:461] "Adding debug handlers to kubelet server"
Jan 14 13:20:20.767699 kubelet[3424]: I0114 13:20:20.767678 3424 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 14 13:20:20.767889 kubelet[3424]: I0114 13:20:20.767873 3424 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 14 13:20:20.772336 kubelet[3424]: I0114 13:20:20.772126 3424 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 14 13:20:20.778391 kubelet[3424]: I0114 13:20:20.778369 3424 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 14 13:20:20.778918 kubelet[3424]: I0114 13:20:20.778902 3424 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 14 13:20:20.782464 kubelet[3424]: I0114 13:20:20.782436 3424 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 14 13:20:20.784256 kubelet[3424]: I0114 13:20:20.784232 3424 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 14 13:20:20.784336 kubelet[3424]: I0114 13:20:20.784277 3424 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 14 13:20:20.784336 kubelet[3424]: I0114 13:20:20.784298 3424 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 14 13:20:20.784421 kubelet[3424]: E0114 13:20:20.784354 3424 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 14 13:20:20.791585 kubelet[3424]: I0114 13:20:20.791557 3424 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 14 13:20:20.793599 kubelet[3424]: E0114 13:20:20.793421 3424 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 14 13:20:20.795208 kubelet[3424]: I0114 13:20:20.795075 3424 factory.go:221] Registration of the containerd container factory successfully
Jan 14 13:20:20.795208 kubelet[3424]: I0114 13:20:20.795095 3424 factory.go:221] Registration of the systemd container factory successfully
Jan 14 13:20:20.839667 kubelet[3424]: I0114 13:20:20.839501 3424 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 14 13:20:20.839667 kubelet[3424]: I0114 13:20:20.839521 3424 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 14 13:20:20.839667 kubelet[3424]: I0114 13:20:20.839543 3424 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 13:20:20.839984 kubelet[3424]: I0114 13:20:20.839973 3424 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 14 13:20:20.840127 kubelet[3424]: I0114 13:20:20.840044 3424 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 14 13:20:20.840127 kubelet[3424]: I0114 13:20:20.840056 3424 policy_none.go:49] "None policy: Start"
Jan 14 13:20:20.841528 kubelet[3424]: I0114 13:20:20.840782 3424 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 14 13:20:20.841528 kubelet[3424]: I0114 13:20:20.840817 3424 state_mem.go:35] "Initializing new in-memory state store"
Jan 14 13:20:20.841528 kubelet[3424]: I0114 13:20:20.840968 3424 state_mem.go:75] "Updated machine memory state"
Jan 14 13:20:20.844961 kubelet[3424]: I0114 13:20:20.844948 3424 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 14 13:20:20.845554 kubelet[3424]: I0114 13:20:20.845540 3424 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 14 13:20:20.875524 kubelet[3424]: I0114 13:20:20.875489 3424 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:20.884807 kubelet[3424]: I0114 13:20:20.884774 3424 topology_manager.go:215] "Topology Admit Handler" podUID="2d322d43db32ac4cc21c6f5effa6a133" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:20.884928 kubelet[3424]: I0114 13:20:20.884906 3424 topology_manager.go:215] "Topology Admit Handler" podUID="e06af29120d1526a46702bccfd2cd2f8" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:20.885461 kubelet[3424]: I0114 13:20:20.885031 3424 topology_manager.go:215] "Topology Admit Handler" podUID="0ee4eb97b560217fbc1ab50f4cc5549d" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:20.897410 kubelet[3424]: W0114 13:20:20.897387 3424 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 14 13:20:20.897566 kubelet[3424]: W0114 13:20:20.897387 3424 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 14 13:20:20.897689 kubelet[3424]: E0114 13:20:20.897676 3424 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152.2.0-a-ae9609fe4e\" already exists" pod="kube-system/kube-apiserver-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:20.897850 kubelet[3424]: W0114 13:20:20.897410 3424 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 14 13:20:20.897850 kubelet[3424]: E0114 13:20:20.897839 3424 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4152.2.0-a-ae9609fe4e\" already exists" pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:20.898016 kubelet[3424]: I0114 13:20:20.897544 3424 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:20.898016 kubelet[3424]: E0114 13:20:20.897626 3424 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4152.2.0-a-ae9609fe4e\" already exists" pod="kube-system/kube-scheduler-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:20.898016 kubelet[3424]: I0114 13:20:20.897942 3424 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:20.979804 kubelet[3424]: I0114 13:20:20.979748 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e06af29120d1526a46702bccfd2cd2f8-ca-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-ae9609fe4e\" (UID: \"e06af29120d1526a46702bccfd2cd2f8\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:20.979804 kubelet[3424]: I0114 13:20:20.979809 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e06af29120d1526a46702bccfd2cd2f8-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-ae9609fe4e\" (UID: \"e06af29120d1526a46702bccfd2cd2f8\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:20.980065 kubelet[3424]: I0114 13:20:20.979849 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e06af29120d1526a46702bccfd2cd2f8-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.0-a-ae9609fe4e\" (UID: \"e06af29120d1526a46702bccfd2cd2f8\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:20.980065 kubelet[3424]: I0114 13:20:20.979883 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e06af29120d1526a46702bccfd2cd2f8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.0-a-ae9609fe4e\" (UID: \"e06af29120d1526a46702bccfd2cd2f8\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:20.980065 kubelet[3424]: I0114 13:20:20.979953 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d322d43db32ac4cc21c6f5effa6a133-k8s-certs\") pod \"kube-apiserver-ci-4152.2.0-a-ae9609fe4e\" (UID: \"2d322d43db32ac4cc21c6f5effa6a133\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:20.980065 kubelet[3424]: I0114 13:20:20.979992 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d322d43db32ac4cc21c6f5effa6a133-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.0-a-ae9609fe4e\" (UID: \"2d322d43db32ac4cc21c6f5effa6a133\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:20.980065 kubelet[3424]: I0114 13:20:20.980029 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ee4eb97b560217fbc1ab50f4cc5549d-kubeconfig\") pod \"kube-scheduler-ci-4152.2.0-a-ae9609fe4e\" (UID: \"0ee4eb97b560217fbc1ab50f4cc5549d\") " pod="kube-system/kube-scheduler-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:20.980320 kubelet[3424]: I0114 13:20:20.980063 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d322d43db32ac4cc21c6f5effa6a133-ca-certs\") pod \"kube-apiserver-ci-4152.2.0-a-ae9609fe4e\" (UID: \"2d322d43db32ac4cc21c6f5effa6a133\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:20.980320 kubelet[3424]: I0114 13:20:20.980101 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e06af29120d1526a46702bccfd2cd2f8-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.0-a-ae9609fe4e\" (UID: \"e06af29120d1526a46702bccfd2cd2f8\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:21.175313 sudo[3453]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 14 13:20:21.175699 sudo[3453]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 14 13:20:21.711317 sudo[3453]: pam_unix(sudo:session): session closed for user root
Jan 14 13:20:21.745332 kubelet[3424]: I0114 13:20:21.744051 3424 apiserver.go:52] "Watching apiserver"
Jan 14 13:20:21.778938 kubelet[3424]: I0114 13:20:21.778858 3424 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 14 13:20:21.838813 kubelet[3424]: W0114 13:20:21.837208 3424 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 14 13:20:21.838813 kubelet[3424]: E0114 13:20:21.837277 3424 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152.2.0-a-ae9609fe4e\" already exists" pod="kube-system/kube-apiserver-ci-4152.2.0-a-ae9609fe4e"
Jan 14 13:20:21.863353 kubelet[3424]: I0114 13:20:21.863231 3424 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ae9609fe4e" podStartSLOduration=3.863180595 podStartE2EDuration="3.863180595s" podCreationTimestamp="2025-01-14 13:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:20:21.862738982 +0000 UTC m=+1.196626534" watchObservedRunningTime="2025-01-14 13:20:21.863180595 +0000 UTC m=+1.197068147"
Jan 14 13:20:21.904562 kubelet[3424]: I0114 13:20:21.904098 3424 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.2.0-a-ae9609fe4e" podStartSLOduration=3.904049264 podStartE2EDuration="3.904049264s" podCreationTimestamp="2025-01-14 13:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:20:21.878531134 +0000 UTC m=+1.212418686" watchObservedRunningTime="2025-01-14 13:20:21.904049264 +0000 UTC m=+1.237936816"
Jan 14 13:20:21.904562 kubelet[3424]: I0114 13:20:21.904249 3424 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.2.0-a-ae9609fe4e" podStartSLOduration=3.904225669 podStartE2EDuration="3.904225669s" podCreationTimestamp="2025-01-14 13:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:20:21.903485148 +0000 UTC m=+1.237372800" watchObservedRunningTime="2025-01-14 13:20:21.904225669 +0000 UTC m=+1.238113221"
Jan 14 13:20:23.320386 sudo[2434]: pam_unix(sudo:session): session closed for user root
Jan 14 13:20:23.420497 sshd[2433]: Connection closed by 10.200.16.10 port 53136
Jan 14 13:20:23.421223 sshd-session[2431]: pam_unix(sshd:session): session closed for user core
Jan 14 13:20:23.425573 systemd[1]: sshd@6-10.200.4.47:22-10.200.16.10:53136.service: Deactivated successfully.
Jan 14 13:20:23.427556 systemd[1]: session-9.scope: Deactivated successfully.
Jan 14 13:20:23.427796 systemd[1]: session-9.scope: Consumed 4.646s CPU time, 187.5M memory peak, 0B memory swap peak.
Jan 14 13:20:23.428444 systemd-logind[1684]: Session 9 logged out. Waiting for processes to exit.
Jan 14 13:20:23.429546 systemd-logind[1684]: Removed session 9.
Jan 14 13:20:33.211643 kubelet[3424]: I0114 13:20:33.211590 3424 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 13:20:33.212412 kubelet[3424]: I0114 13:20:33.212363 3424 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 13:20:33.212494 containerd[1713]: time="2025-01-14T13:20:33.212122175Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 13:20:34.060994 kubelet[3424]: I0114 13:20:34.059934 3424 topology_manager.go:215] "Topology Admit Handler" podUID="d53f8a3f-b093-4949-80d5-de3a37614046" podNamespace="kube-system" podName="cilium-operator-5cc964979-6jj8j" Jan 14 13:20:34.065133 kubelet[3424]: W0114 13:20:34.065081 3424 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4152.2.0-a-ae9609fe4e" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.0-a-ae9609fe4e' and this object Jan 14 13:20:34.065133 kubelet[3424]: E0114 13:20:34.065138 3424 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4152.2.0-a-ae9609fe4e" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.0-a-ae9609fe4e' and this object Jan 14 13:20:34.065317 kubelet[3424]: W0114 13:20:34.065189 3424 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4152.2.0-a-ae9609fe4e" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.0-a-ae9609fe4e' and this object Jan 14 13:20:34.065317 
kubelet[3424]: E0114 13:20:34.065205 3424 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4152.2.0-a-ae9609fe4e" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.0-a-ae9609fe4e' and this object Jan 14 13:20:34.073011 systemd[1]: Created slice kubepods-besteffort-podd53f8a3f_b093_4949_80d5_de3a37614046.slice - libcontainer container kubepods-besteffort-podd53f8a3f_b093_4949_80d5_de3a37614046.slice. Jan 14 13:20:34.208400 kubelet[3424]: I0114 13:20:34.208343 3424 topology_manager.go:215] "Topology Admit Handler" podUID="b02349cb-1e13-4c38-8b74-f12de4d61752" podNamespace="kube-system" podName="kube-proxy-plcrd" Jan 14 13:20:34.217332 systemd[1]: Created slice kubepods-besteffort-podb02349cb_1e13_4c38_8b74_f12de4d61752.slice - libcontainer container kubepods-besteffort-podb02349cb_1e13_4c38_8b74_f12de4d61752.slice. Jan 14 13:20:34.238600 kubelet[3424]: I0114 13:20:34.238561 3424 topology_manager.go:215] "Topology Admit Handler" podUID="6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" podNamespace="kube-system" podName="cilium-tzmql" Jan 14 13:20:34.251094 systemd[1]: Created slice kubepods-burstable-pod6bc4c6e2_6ac4_448f_a610_1d7a138ae1d6.slice - libcontainer container kubepods-burstable-pod6bc4c6e2_6ac4_448f_a610_1d7a138ae1d6.slice. 
Jan 14 13:20:34.255026 kubelet[3424]: I0114 13:20:34.255001 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tshgx\" (UniqueName: \"kubernetes.io/projected/d53f8a3f-b093-4949-80d5-de3a37614046-kube-api-access-tshgx\") pod \"cilium-operator-5cc964979-6jj8j\" (UID: \"d53f8a3f-b093-4949-80d5-de3a37614046\") " pod="kube-system/cilium-operator-5cc964979-6jj8j" Jan 14 13:20:34.255374 kubelet[3424]: I0114 13:20:34.255046 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d53f8a3f-b093-4949-80d5-de3a37614046-cilium-config-path\") pod \"cilium-operator-5cc964979-6jj8j\" (UID: \"d53f8a3f-b093-4949-80d5-de3a37614046\") " pod="kube-system/cilium-operator-5cc964979-6jj8j" Jan 14 13:20:34.356733 kubelet[3424]: I0114 13:20:34.355917 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b02349cb-1e13-4c38-8b74-f12de4d61752-kube-proxy\") pod \"kube-proxy-plcrd\" (UID: \"b02349cb-1e13-4c38-8b74-f12de4d61752\") " pod="kube-system/kube-proxy-plcrd" Jan 14 13:20:34.356733 kubelet[3424]: I0114 13:20:34.355973 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-host-proc-sys-net\") pod \"cilium-tzmql\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") " pod="kube-system/cilium-tzmql" Jan 14 13:20:34.356733 kubelet[3424]: I0114 13:20:34.356011 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b02349cb-1e13-4c38-8b74-f12de4d61752-lib-modules\") pod \"kube-proxy-plcrd\" (UID: \"b02349cb-1e13-4c38-8b74-f12de4d61752\") " pod="kube-system/kube-proxy-plcrd" Jan 14 
13:20:34.356733 kubelet[3424]: I0114 13:20:34.356046 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-cilium-cgroup\") pod \"cilium-tzmql\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") " pod="kube-system/cilium-tzmql" Jan 14 13:20:34.356733 kubelet[3424]: I0114 13:20:34.356085 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-cilium-config-path\") pod \"cilium-tzmql\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") " pod="kube-system/cilium-tzmql" Jan 14 13:20:34.356733 kubelet[3424]: I0114 13:20:34.356120 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-cilium-run\") pod \"cilium-tzmql\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") " pod="kube-system/cilium-tzmql" Jan 14 13:20:34.357152 kubelet[3424]: I0114 13:20:34.356154 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-lib-modules\") pod \"cilium-tzmql\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") " pod="kube-system/cilium-tzmql" Jan 14 13:20:34.357152 kubelet[3424]: I0114 13:20:34.356187 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-hubble-tls\") pod \"cilium-tzmql\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") " pod="kube-system/cilium-tzmql" Jan 14 13:20:34.357152 kubelet[3424]: I0114 13:20:34.356223 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-bpf-maps\") pod \"cilium-tzmql\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") " pod="kube-system/cilium-tzmql" Jan 14 13:20:34.357152 kubelet[3424]: I0114 13:20:34.356255 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-hostproc\") pod \"cilium-tzmql\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") " pod="kube-system/cilium-tzmql" Jan 14 13:20:34.357152 kubelet[3424]: I0114 13:20:34.356292 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-host-proc-sys-kernel\") pod \"cilium-tzmql\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") " pod="kube-system/cilium-tzmql" Jan 14 13:20:34.357152 kubelet[3424]: I0114 13:20:34.356328 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-clustermesh-secrets\") pod \"cilium-tzmql\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") " pod="kube-system/cilium-tzmql" Jan 14 13:20:34.357466 kubelet[3424]: I0114 13:20:34.356363 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq7k6\" (UniqueName: \"kubernetes.io/projected/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-kube-api-access-sq7k6\") pod \"cilium-tzmql\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") " pod="kube-system/cilium-tzmql" Jan 14 13:20:34.357466 kubelet[3424]: I0114 13:20:34.356397 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/b02349cb-1e13-4c38-8b74-f12de4d61752-xtables-lock\") pod \"kube-proxy-plcrd\" (UID: \"b02349cb-1e13-4c38-8b74-f12de4d61752\") " pod="kube-system/kube-proxy-plcrd" Jan 14 13:20:34.357466 kubelet[3424]: I0114 13:20:34.356434 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9kfw\" (UniqueName: \"kubernetes.io/projected/b02349cb-1e13-4c38-8b74-f12de4d61752-kube-api-access-s9kfw\") pod \"kube-proxy-plcrd\" (UID: \"b02349cb-1e13-4c38-8b74-f12de4d61752\") " pod="kube-system/kube-proxy-plcrd" Jan 14 13:20:34.357466 kubelet[3424]: I0114 13:20:34.356468 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-etc-cni-netd\") pod \"cilium-tzmql\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") " pod="kube-system/cilium-tzmql" Jan 14 13:20:34.357466 kubelet[3424]: I0114 13:20:34.356504 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-xtables-lock\") pod \"cilium-tzmql\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") " pod="kube-system/cilium-tzmql" Jan 14 13:20:34.358918 kubelet[3424]: I0114 13:20:34.356617 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-cni-path\") pod \"cilium-tzmql\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") " pod="kube-system/cilium-tzmql" Jan 14 13:20:35.364664 kubelet[3424]: E0114 13:20:35.364617 3424 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 14 13:20:35.364664 kubelet[3424]: E0114 13:20:35.364665 3424 projected.go:200] Error preparing data 
for projected volume kube-api-access-tshgx for pod kube-system/cilium-operator-5cc964979-6jj8j: failed to sync configmap cache: timed out waiting for the condition Jan 14 13:20:35.365268 kubelet[3424]: E0114 13:20:35.364758 3424 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d53f8a3f-b093-4949-80d5-de3a37614046-kube-api-access-tshgx podName:d53f8a3f-b093-4949-80d5-de3a37614046 nodeName:}" failed. No retries permitted until 2025-01-14 13:20:35.864729122 +0000 UTC m=+15.198616774 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tshgx" (UniqueName: "kubernetes.io/projected/d53f8a3f-b093-4949-80d5-de3a37614046-kube-api-access-tshgx") pod "cilium-operator-5cc964979-6jj8j" (UID: "d53f8a3f-b093-4949-80d5-de3a37614046") : failed to sync configmap cache: timed out waiting for the condition Jan 14 13:20:35.473373 kubelet[3424]: E0114 13:20:35.473318 3424 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 14 13:20:35.473373 kubelet[3424]: E0114 13:20:35.473369 3424 projected.go:200] Error preparing data for projected volume kube-api-access-s9kfw for pod kube-system/kube-proxy-plcrd: failed to sync configmap cache: timed out waiting for the condition Jan 14 13:20:35.473678 kubelet[3424]: E0114 13:20:35.473458 3424 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b02349cb-1e13-4c38-8b74-f12de4d61752-kube-api-access-s9kfw podName:b02349cb-1e13-4c38-8b74-f12de4d61752 nodeName:}" failed. No retries permitted until 2025-01-14 13:20:35.973430845 +0000 UTC m=+15.307318497 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s9kfw" (UniqueName: "kubernetes.io/projected/b02349cb-1e13-4c38-8b74-f12de4d61752-kube-api-access-s9kfw") pod "kube-proxy-plcrd" (UID: "b02349cb-1e13-4c38-8b74-f12de4d61752") : failed to sync configmap cache: timed out waiting for the condition Jan 14 13:20:35.474561 kubelet[3424]: E0114 13:20:35.474482 3424 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 14 13:20:35.474710 kubelet[3424]: E0114 13:20:35.474591 3424 projected.go:200] Error preparing data for projected volume kube-api-access-sq7k6 for pod kube-system/cilium-tzmql: failed to sync configmap cache: timed out waiting for the condition Jan 14 13:20:35.474710 kubelet[3424]: E0114 13:20:35.474677 3424 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-kube-api-access-sq7k6 podName:6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6 nodeName:}" failed. No retries permitted until 2025-01-14 13:20:35.97465578 +0000 UTC m=+15.308543332 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sq7k6" (UniqueName: "kubernetes.io/projected/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-kube-api-access-sq7k6") pod "cilium-tzmql" (UID: "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6") : failed to sync configmap cache: timed out waiting for the condition Jan 14 13:20:36.183843 containerd[1713]: time="2025-01-14T13:20:36.183774154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-6jj8j,Uid:d53f8a3f-b093-4949-80d5-de3a37614046,Namespace:kube-system,Attempt:0,}" Jan 14 13:20:36.233863 containerd[1713]: time="2025-01-14T13:20:36.233757390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:20:36.234543 containerd[1713]: time="2025-01-14T13:20:36.234481511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:20:36.234713 containerd[1713]: time="2025-01-14T13:20:36.234632415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:20:36.234940 containerd[1713]: time="2025-01-14T13:20:36.234889022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:20:36.261756 systemd[1]: Started cri-containerd-7f16e1882834e4f35bd0f9a5b338205143dd67bfd77f6caec57a8dccbfc2015a.scope - libcontainer container 7f16e1882834e4f35bd0f9a5b338205143dd67bfd77f6caec57a8dccbfc2015a. Jan 14 13:20:36.301499 containerd[1713]: time="2025-01-14T13:20:36.301452335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-6jj8j,Uid:d53f8a3f-b093-4949-80d5-de3a37614046,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f16e1882834e4f35bd0f9a5b338205143dd67bfd77f6caec57a8dccbfc2015a\"" Jan 14 13:20:36.304808 containerd[1713]: time="2025-01-14T13:20:36.304766330Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 14 13:20:36.322422 containerd[1713]: time="2025-01-14T13:20:36.322375436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-plcrd,Uid:b02349cb-1e13-4c38-8b74-f12de4d61752,Namespace:kube-system,Attempt:0,}" Jan 14 13:20:36.358668 containerd[1713]: time="2025-01-14T13:20:36.357866856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tzmql,Uid:6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6,Namespace:kube-system,Attempt:0,}" Jan 14 13:20:36.369567 containerd[1713]: time="2025-01-14T13:20:36.368892572Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:20:36.369825 containerd[1713]: time="2025-01-14T13:20:36.369541891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:20:36.369825 containerd[1713]: time="2025-01-14T13:20:36.369683895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:20:36.369825 containerd[1713]: time="2025-01-14T13:20:36.369777598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:20:36.388812 systemd[1]: Started cri-containerd-48daa94b398e0e89685ff981ef63d24b35bb9845018c27a48126079e5f7637c1.scope - libcontainer container 48daa94b398e0e89685ff981ef63d24b35bb9845018c27a48126079e5f7637c1. Jan 14 13:20:36.414551 containerd[1713]: time="2025-01-14T13:20:36.414118272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:20:36.414551 containerd[1713]: time="2025-01-14T13:20:36.414194774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:20:36.414551 containerd[1713]: time="2025-01-14T13:20:36.414228575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:20:36.415521 containerd[1713]: time="2025-01-14T13:20:36.415153101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:20:36.424778 containerd[1713]: time="2025-01-14T13:20:36.424725076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-plcrd,Uid:b02349cb-1e13-4c38-8b74-f12de4d61752,Namespace:kube-system,Attempt:0,} returns sandbox id \"48daa94b398e0e89685ff981ef63d24b35bb9845018c27a48126079e5f7637c1\"" Jan 14 13:20:36.429131 containerd[1713]: time="2025-01-14T13:20:36.428929197Z" level=info msg="CreateContainer within sandbox \"48daa94b398e0e89685ff981ef63d24b35bb9845018c27a48126079e5f7637c1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 13:20:36.442810 systemd[1]: Started cri-containerd-19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c.scope - libcontainer container 19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c. Jan 14 13:20:36.475135 containerd[1713]: time="2025-01-14T13:20:36.475031922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tzmql,Uid:6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c\"" Jan 14 13:20:36.499829 containerd[1713]: time="2025-01-14T13:20:36.499774174Z" level=info msg="CreateContainer within sandbox \"48daa94b398e0e89685ff981ef63d24b35bb9845018c27a48126079e5f7637c1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7839ec3a0ea126d81cd8b322a4e0bfabbd4405e450021f0662f2e54b7072962b\"" Jan 14 13:20:36.500501 containerd[1713]: time="2025-01-14T13:20:36.500453387Z" level=info msg="StartContainer for \"7839ec3a0ea126d81cd8b322a4e0bfabbd4405e450021f0662f2e54b7072962b\"" Jan 14 13:20:36.529799 systemd[1]: Started cri-containerd-7839ec3a0ea126d81cd8b322a4e0bfabbd4405e450021f0662f2e54b7072962b.scope - libcontainer container 7839ec3a0ea126d81cd8b322a4e0bfabbd4405e450021f0662f2e54b7072962b. 
Jan 14 13:20:36.567978 containerd[1713]: time="2025-01-14T13:20:36.567927997Z" level=info msg="StartContainer for \"7839ec3a0ea126d81cd8b322a4e0bfabbd4405e450021f0662f2e54b7072962b\" returns successfully" Jan 14 13:20:36.861579 kubelet[3424]: I0114 13:20:36.860649 3424 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-plcrd" podStartSLOduration=2.860587781 podStartE2EDuration="2.860587781s" podCreationTimestamp="2025-01-14 13:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:20:36.860259174 +0000 UTC m=+16.194146826" watchObservedRunningTime="2025-01-14 13:20:36.860587781 +0000 UTC m=+16.194475433" Jan 14 13:20:38.129353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4065982636.mount: Deactivated successfully. Jan 14 13:20:38.851005 containerd[1713]: time="2025-01-14T13:20:38.850951633Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:20:38.853210 containerd[1713]: time="2025-01-14T13:20:38.853161276Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907217" Jan 14 13:20:38.856413 containerd[1713]: time="2025-01-14T13:20:38.856357838Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:20:38.858324 containerd[1713]: time="2025-01-14T13:20:38.858293676Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", 
repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.553485445s" Jan 14 13:20:38.858415 containerd[1713]: time="2025-01-14T13:20:38.858326077Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 14 13:20:38.860542 containerd[1713]: time="2025-01-14T13:20:38.860507419Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 14 13:20:38.860991 containerd[1713]: time="2025-01-14T13:20:38.860958128Z" level=info msg="CreateContainer within sandbox \"7f16e1882834e4f35bd0f9a5b338205143dd67bfd77f6caec57a8dccbfc2015a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 14 13:20:38.899617 containerd[1713]: time="2025-01-14T13:20:38.899571878Z" level=info msg="CreateContainer within sandbox \"7f16e1882834e4f35bd0f9a5b338205143dd67bfd77f6caec57a8dccbfc2015a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce\"" Jan 14 13:20:38.901075 containerd[1713]: time="2025-01-14T13:20:38.900110388Z" level=info msg="StartContainer for \"9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce\"" Jan 14 13:20:38.929801 systemd[1]: Started cri-containerd-9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce.scope - libcontainer container 9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce. 
Jan 14 13:20:38.960535 containerd[1713]: time="2025-01-14T13:20:38.960488061Z" level=info msg="StartContainer for \"9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce\" returns successfully" Jan 14 13:20:43.789205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4162885346.mount: Deactivated successfully. Jan 14 13:20:50.923322 containerd[1713]: time="2025-01-14T13:20:50.923258110Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:20:50.926759 containerd[1713]: time="2025-01-14T13:20:50.926686104Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734691" Jan 14 13:20:50.933748 containerd[1713]: time="2025-01-14T13:20:50.933703797Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:20:50.935720 containerd[1713]: time="2025-01-14T13:20:50.935528947Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.074978727s" Jan 14 13:20:50.935720 containerd[1713]: time="2025-01-14T13:20:50.935569448Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 14 13:20:50.938588 containerd[1713]: time="2025-01-14T13:20:50.938188520Z" level=info 
msg="CreateContainer within sandbox \"19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 14 13:20:50.972723 containerd[1713]: time="2025-01-14T13:20:50.972679868Z" level=info msg="CreateContainer within sandbox \"19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2\"" Jan 14 13:20:50.973639 containerd[1713]: time="2025-01-14T13:20:50.973143081Z" level=info msg="StartContainer for \"f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2\"" Jan 14 13:20:51.008790 systemd[1]: Started cri-containerd-f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2.scope - libcontainer container f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2. Jan 14 13:20:51.041015 containerd[1713]: time="2025-01-14T13:20:51.039546405Z" level=info msg="StartContainer for \"f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2\" returns successfully" Jan 14 13:20:51.048336 systemd[1]: cri-containerd-f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2.scope: Deactivated successfully. Jan 14 13:20:51.073285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2-rootfs.mount: Deactivated successfully. 
Jan 14 13:20:51.900117 kubelet[3424]: I0114 13:20:51.900075 3424 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-6jj8j" podStartSLOduration=15.344500445 podStartE2EDuration="17.900015244s" podCreationTimestamp="2025-01-14 13:20:34 +0000 UTC" firstStartedPulling="2025-01-14 13:20:36.303163784 +0000 UTC m=+15.637051336" lastFinishedPulling="2025-01-14 13:20:38.858678483 +0000 UTC m=+18.192566135" observedRunningTime="2025-01-14 13:20:39.868637897 +0000 UTC m=+19.202525449" watchObservedRunningTime="2025-01-14 13:20:51.900015244 +0000 UTC m=+31.233902796" Jan 14 13:20:55.221580 containerd[1713]: time="2025-01-14T13:20:55.221504543Z" level=info msg="shim disconnected" id=f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2 namespace=k8s.io Jan 14 13:20:55.222266 containerd[1713]: time="2025-01-14T13:20:55.221640947Z" level=warning msg="cleaning up after shim disconnected" id=f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2 namespace=k8s.io Jan 14 13:20:55.222266 containerd[1713]: time="2025-01-14T13:20:55.221658947Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:20:55.233915 containerd[1713]: time="2025-01-14T13:20:55.233855264Z" level=warning msg="cleanup warnings time=\"2025-01-14T13:20:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 14 13:20:55.894423 containerd[1713]: time="2025-01-14T13:20:55.893889426Z" level=info msg="CreateContainer within sandbox \"19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 14 13:20:55.929070 containerd[1713]: time="2025-01-14T13:20:55.929027440Z" level=info msg="CreateContainer within sandbox \"19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d\"" Jan 14 13:20:55.929657 containerd[1713]: time="2025-01-14T13:20:55.929398149Z" level=info msg="StartContainer for \"e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d\"" Jan 14 13:20:55.961760 systemd[1]: Started cri-containerd-e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d.scope - libcontainer container e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d. Jan 14 13:20:55.989334 containerd[1713]: time="2025-01-14T13:20:55.988400883Z" level=info msg="StartContainer for \"e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d\" returns successfully" Jan 14 13:20:55.998785 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 13:20:55.999570 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:20:55.999792 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:20:56.009704 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:20:56.009987 systemd[1]: cri-containerd-e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d.scope: Deactivated successfully. Jan 14 13:20:56.034053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d-rootfs.mount: Deactivated successfully. Jan 14 13:20:56.035477 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 14 13:20:56.046637 containerd[1713]: time="2025-01-14T13:20:56.046541195Z" level=info msg="shim disconnected" id=e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d namespace=k8s.io Jan 14 13:20:56.046773 containerd[1713]: time="2025-01-14T13:20:56.046640598Z" level=warning msg="cleaning up after shim disconnected" id=e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d namespace=k8s.io Jan 14 13:20:56.046773 containerd[1713]: time="2025-01-14T13:20:56.046654998Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:20:56.899123 containerd[1713]: time="2025-01-14T13:20:56.898921158Z" level=info msg="CreateContainer within sandbox \"19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 14 13:20:56.954950 containerd[1713]: time="2025-01-14T13:20:56.954904613Z" level=info msg="CreateContainer within sandbox \"19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c\"" Jan 14 13:20:56.955944 containerd[1713]: time="2025-01-14T13:20:56.955596231Z" level=info msg="StartContainer for \"be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c\"" Jan 14 13:20:56.991991 systemd[1]: Started cri-containerd-be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c.scope - libcontainer container be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c. Jan 14 13:20:57.021710 systemd[1]: cri-containerd-be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c.scope: Deactivated successfully. 
Jan 14 13:20:57.022369 containerd[1713]: time="2025-01-14T13:20:57.022279365Z" level=info msg="StartContainer for \"be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c\" returns successfully" Jan 14 13:20:57.043322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c-rootfs.mount: Deactivated successfully. Jan 14 13:20:57.063032 containerd[1713]: time="2025-01-14T13:20:57.062941623Z" level=info msg="shim disconnected" id=be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c namespace=k8s.io Jan 14 13:20:57.063032 containerd[1713]: time="2025-01-14T13:20:57.063029425Z" level=warning msg="cleaning up after shim disconnected" id=be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c namespace=k8s.io Jan 14 13:20:57.063315 containerd[1713]: time="2025-01-14T13:20:57.063042125Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:20:57.905940 containerd[1713]: time="2025-01-14T13:20:57.905756137Z" level=info msg="CreateContainer within sandbox \"19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 14 13:20:57.944149 containerd[1713]: time="2025-01-14T13:20:57.944103134Z" level=info msg="CreateContainer within sandbox \"19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c\"" Jan 14 13:20:57.944688 containerd[1713]: time="2025-01-14T13:20:57.944657848Z" level=info msg="StartContainer for \"d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c\"" Jan 14 13:20:57.980803 systemd[1]: Started cri-containerd-d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c.scope - libcontainer container d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c. 
Jan 14 13:20:58.007080 systemd[1]: cri-containerd-d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c.scope: Deactivated successfully. Jan 14 13:20:58.012705 containerd[1713]: time="2025-01-14T13:20:58.012312007Z" level=info msg="StartContainer for \"d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c\" returns successfully" Jan 14 13:20:58.032162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c-rootfs.mount: Deactivated successfully. Jan 14 13:20:58.043814 containerd[1713]: time="2025-01-14T13:20:58.043748025Z" level=info msg="shim disconnected" id=d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c namespace=k8s.io Jan 14 13:20:58.043963 containerd[1713]: time="2025-01-14T13:20:58.043812026Z" level=warning msg="cleaning up after shim disconnected" id=d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c namespace=k8s.io Jan 14 13:20:58.043963 containerd[1713]: time="2025-01-14T13:20:58.043824527Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:20:58.909263 containerd[1713]: time="2025-01-14T13:20:58.908908820Z" level=info msg="CreateContainer within sandbox \"19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 14 13:20:58.944275 containerd[1713]: time="2025-01-14T13:20:58.944229638Z" level=info msg="CreateContainer within sandbox \"19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325\"" Jan 14 13:20:58.946095 containerd[1713]: time="2025-01-14T13:20:58.944794953Z" level=info msg="StartContainer for \"629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325\"" Jan 14 13:20:58.974335 systemd[1]: 
run-containerd-runc-k8s.io-629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325-runc.yqGKuI.mount: Deactivated successfully. Jan 14 13:20:58.983771 systemd[1]: Started cri-containerd-629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325.scope - libcontainer container 629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325. Jan 14 13:20:59.013321 containerd[1713]: time="2025-01-14T13:20:59.013275233Z" level=info msg="StartContainer for \"629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325\" returns successfully" Jan 14 13:20:59.129732 kubelet[3424]: I0114 13:20:59.129590 3424 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 14 13:20:59.171490 kubelet[3424]: I0114 13:20:59.169717 3424 topology_manager.go:215] "Topology Admit Handler" podUID="7eceda52-5919-4369-a15b-dbfd7780fe1a" podNamespace="kube-system" podName="coredns-76f75df574-pgbnd" Jan 14 13:20:59.182883 kubelet[3424]: W0114 13:20:59.181222 3424 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4152.2.0-a-ae9609fe4e" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.0-a-ae9609fe4e' and this object Jan 14 13:20:59.182424 systemd[1]: Created slice kubepods-burstable-pod7eceda52_5919_4369_a15b_dbfd7780fe1a.slice - libcontainer container kubepods-burstable-pod7eceda52_5919_4369_a15b_dbfd7780fe1a.slice. 
Jan 14 13:20:59.186554 kubelet[3424]: E0114 13:20:59.186113 3424 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4152.2.0-a-ae9609fe4e" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.0-a-ae9609fe4e' and this object Jan 14 13:20:59.188423 kubelet[3424]: I0114 13:20:59.187903 3424 topology_manager.go:215] "Topology Admit Handler" podUID="0f7f1359-317a-4220-b773-bc368a4e321f" podNamespace="kube-system" podName="coredns-76f75df574-9928g" Jan 14 13:20:59.201750 systemd[1]: Created slice kubepods-burstable-pod0f7f1359_317a_4220_b773_bc368a4e321f.slice - libcontainer container kubepods-burstable-pod0f7f1359_317a_4220_b773_bc368a4e321f.slice. Jan 14 13:20:59.234836 kubelet[3424]: I0114 13:20:59.234497 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njxx6\" (UniqueName: \"kubernetes.io/projected/0f7f1359-317a-4220-b773-bc368a4e321f-kube-api-access-njxx6\") pod \"coredns-76f75df574-9928g\" (UID: \"0f7f1359-317a-4220-b773-bc368a4e321f\") " pod="kube-system/coredns-76f75df574-9928g" Jan 14 13:20:59.234836 kubelet[3424]: I0114 13:20:59.234554 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7eceda52-5919-4369-a15b-dbfd7780fe1a-config-volume\") pod \"coredns-76f75df574-pgbnd\" (UID: \"7eceda52-5919-4369-a15b-dbfd7780fe1a\") " pod="kube-system/coredns-76f75df574-pgbnd" Jan 14 13:20:59.234836 kubelet[3424]: I0114 13:20:59.234587 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f7f1359-317a-4220-b773-bc368a4e321f-config-volume\") pod \"coredns-76f75df574-9928g\" (UID: 
\"0f7f1359-317a-4220-b773-bc368a4e321f\") " pod="kube-system/coredns-76f75df574-9928g" Jan 14 13:20:59.234836 kubelet[3424]: I0114 13:20:59.234636 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll2xw\" (UniqueName: \"kubernetes.io/projected/7eceda52-5919-4369-a15b-dbfd7780fe1a-kube-api-access-ll2xw\") pod \"coredns-76f75df574-pgbnd\" (UID: \"7eceda52-5919-4369-a15b-dbfd7780fe1a\") " pod="kube-system/coredns-76f75df574-pgbnd" Jan 14 13:20:59.930165 kubelet[3424]: I0114 13:20:59.930121 3424 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-tzmql" podStartSLOduration=11.470501673 podStartE2EDuration="25.930066971s" podCreationTimestamp="2025-01-14 13:20:34 +0000 UTC" firstStartedPulling="2025-01-14 13:20:36.476477663 +0000 UTC m=+15.810365315" lastFinishedPulling="2025-01-14 13:20:50.936043061 +0000 UTC m=+30.269930613" observedRunningTime="2025-01-14 13:20:59.929691861 +0000 UTC m=+39.263579413" watchObservedRunningTime="2025-01-14 13:20:59.930066971 +0000 UTC m=+39.263954523" Jan 14 13:21:00.336410 kubelet[3424]: E0114 13:21:00.336231 3424 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 14 13:21:00.336410 kubelet[3424]: E0114 13:21:00.336395 3424 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f7f1359-317a-4220-b773-bc368a4e321f-config-volume podName:0f7f1359-317a-4220-b773-bc368a4e321f nodeName:}" failed. No retries permitted until 2025-01-14 13:21:00.836364835 +0000 UTC m=+40.170252487 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0f7f1359-317a-4220-b773-bc368a4e321f-config-volume") pod "coredns-76f75df574-9928g" (UID: "0f7f1359-317a-4220-b773-bc368a4e321f") : failed to sync configmap cache: timed out waiting for the condition Jan 14 13:21:00.337126 kubelet[3424]: E0114 13:21:00.336754 3424 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 14 13:21:00.337126 kubelet[3424]: E0114 13:21:00.336827 3424 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7eceda52-5919-4369-a15b-dbfd7780fe1a-config-volume podName:7eceda52-5919-4369-a15b-dbfd7780fe1a nodeName:}" failed. No retries permitted until 2025-01-14 13:21:00.836807946 +0000 UTC m=+40.170695498 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7eceda52-5919-4369-a15b-dbfd7780fe1a-config-volume") pod "coredns-76f75df574-pgbnd" (UID: "7eceda52-5919-4369-a15b-dbfd7780fe1a") : failed to sync configmap cache: timed out waiting for the condition Jan 14 13:21:00.994489 containerd[1713]: time="2025-01-14T13:21:00.994433818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pgbnd,Uid:7eceda52-5919-4369-a15b-dbfd7780fe1a,Namespace:kube-system,Attempt:0,}" Jan 14 13:21:01.011218 containerd[1713]: time="2025-01-14T13:21:01.011170627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9928g,Uid:0f7f1359-317a-4220-b773-bc368a4e321f,Namespace:kube-system,Attempt:0,}" Jan 14 13:21:01.207532 systemd-networkd[1438]: cilium_host: Link UP Jan 14 13:21:01.207899 systemd-networkd[1438]: cilium_net: Link UP Jan 14 13:21:01.208107 systemd-networkd[1438]: cilium_net: Gained carrier Jan 14 13:21:01.208306 systemd-networkd[1438]: cilium_host: Gained carrier Jan 14 13:21:01.513379 systemd-networkd[1438]: cilium_vxlan: Link UP Jan 
14 13:21:01.513391 systemd-networkd[1438]: cilium_vxlan: Gained carrier Jan 14 13:21:01.804654 kernel: NET: Registered PF_ALG protocol family Jan 14 13:21:01.868940 systemd-networkd[1438]: cilium_net: Gained IPv6LL Jan 14 13:21:01.997839 systemd-networkd[1438]: cilium_host: Gained IPv6LL Jan 14 13:21:02.521087 systemd-networkd[1438]: lxc_health: Link UP Jan 14 13:21:02.523309 systemd-networkd[1438]: lxc_health: Gained carrier Jan 14 13:21:03.089408 kernel: eth0: renamed from tmp3fa59 Jan 14 13:21:03.093931 systemd-networkd[1438]: lxcf54c3601cd66: Link UP Jan 14 13:21:03.097062 systemd-networkd[1438]: lxcf54c3601cd66: Gained carrier Jan 14 13:21:03.117222 systemd-networkd[1438]: lxc661436e68052: Link UP Jan 14 13:21:03.127739 kernel: eth0: renamed from tmp4bde5 Jan 14 13:21:03.132598 systemd-networkd[1438]: lxc661436e68052: Gained carrier Jan 14 13:21:03.212975 systemd-networkd[1438]: cilium_vxlan: Gained IPv6LL Jan 14 13:21:03.596899 systemd-networkd[1438]: lxc_health: Gained IPv6LL Jan 14 13:21:04.300824 systemd-networkd[1438]: lxc661436e68052: Gained IPv6LL Jan 14 13:21:04.812928 systemd-networkd[1438]: lxcf54c3601cd66: Gained IPv6LL Jan 14 13:21:06.869745 containerd[1713]: time="2025-01-14T13:21:06.868729367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:21:06.870448 containerd[1713]: time="2025-01-14T13:21:06.869762292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:21:06.870448 containerd[1713]: time="2025-01-14T13:21:06.869820994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:21:06.870448 containerd[1713]: time="2025-01-14T13:21:06.869953697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:21:06.913106 systemd[1]: Started cri-containerd-3fa593e6377d8252c5385cd560552f6742967ae1148b399d7f2d4147ba3bfa6f.scope - libcontainer container 3fa593e6377d8252c5385cd560552f6742967ae1148b399d7f2d4147ba3bfa6f. Jan 14 13:21:06.968634 containerd[1713]: time="2025-01-14T13:21:06.967178474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:21:06.968634 containerd[1713]: time="2025-01-14T13:21:06.967311678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:21:06.968634 containerd[1713]: time="2025-01-14T13:21:06.967354479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:21:06.968634 containerd[1713]: time="2025-01-14T13:21:06.967506782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:21:07.008586 systemd[1]: run-containerd-runc-k8s.io-4bde5252e3800ebbc041971c92ea22be0873d95f1fab7a041a06a4e460527837-runc.oC07A7.mount: Deactivated successfully. Jan 14 13:21:07.017149 systemd[1]: Started cri-containerd-4bde5252e3800ebbc041971c92ea22be0873d95f1fab7a041a06a4e460527837.scope - libcontainer container 4bde5252e3800ebbc041971c92ea22be0873d95f1fab7a041a06a4e460527837. 
Jan 14 13:21:07.038226 containerd[1713]: time="2025-01-14T13:21:07.038175710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pgbnd,Uid:7eceda52-5919-4369-a15b-dbfd7780fe1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fa593e6377d8252c5385cd560552f6742967ae1148b399d7f2d4147ba3bfa6f\"" Jan 14 13:21:07.045215 containerd[1713]: time="2025-01-14T13:21:07.045032478Z" level=info msg="CreateContainer within sandbox \"3fa593e6377d8252c5385cd560552f6742967ae1148b399d7f2d4147ba3bfa6f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 13:21:07.090977 containerd[1713]: time="2025-01-14T13:21:07.090922500Z" level=info msg="CreateContainer within sandbox \"3fa593e6377d8252c5385cd560552f6742967ae1148b399d7f2d4147ba3bfa6f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"12a4ce08274ba639c36ec4d609f7af150f8e8060334896c520f4486d95f01e04\"" Jan 14 13:21:07.091956 containerd[1713]: time="2025-01-14T13:21:07.091917125Z" level=info msg="StartContainer for \"12a4ce08274ba639c36ec4d609f7af150f8e8060334896c520f4486d95f01e04\"" Jan 14 13:21:07.104947 containerd[1713]: time="2025-01-14T13:21:07.104367129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9928g,Uid:0f7f1359-317a-4220-b773-bc368a4e321f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bde5252e3800ebbc041971c92ea22be0873d95f1fab7a041a06a4e460527837\"" Jan 14 13:21:07.113580 containerd[1713]: time="2025-01-14T13:21:07.112813836Z" level=info msg="CreateContainer within sandbox \"4bde5252e3800ebbc041971c92ea22be0873d95f1fab7a041a06a4e460527837\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 13:21:07.140791 systemd[1]: Started cri-containerd-12a4ce08274ba639c36ec4d609f7af150f8e8060334896c520f4486d95f01e04.scope - libcontainer container 12a4ce08274ba639c36ec4d609f7af150f8e8060334896c520f4486d95f01e04. 
Jan 14 13:21:07.156851 containerd[1713]: time="2025-01-14T13:21:07.156573306Z" level=info msg="CreateContainer within sandbox \"4bde5252e3800ebbc041971c92ea22be0873d95f1fab7a041a06a4e460527837\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b98470d8708db6e423be68c910d65ee3c28940fea8315347a26b2a7f023762e\"" Jan 14 13:21:07.160907 containerd[1713]: time="2025-01-14T13:21:07.158987465Z" level=info msg="StartContainer for \"2b98470d8708db6e423be68c910d65ee3c28940fea8315347a26b2a7f023762e\"" Jan 14 13:21:07.176912 containerd[1713]: time="2025-01-14T13:21:07.176752199Z" level=info msg="StartContainer for \"12a4ce08274ba639c36ec4d609f7af150f8e8060334896c520f4486d95f01e04\" returns successfully" Jan 14 13:21:07.199807 systemd[1]: Started cri-containerd-2b98470d8708db6e423be68c910d65ee3c28940fea8315347a26b2a7f023762e.scope - libcontainer container 2b98470d8708db6e423be68c910d65ee3c28940fea8315347a26b2a7f023762e. Jan 14 13:21:07.246635 containerd[1713]: time="2025-01-14T13:21:07.246529205Z" level=info msg="StartContainer for \"2b98470d8708db6e423be68c910d65ee3c28940fea8315347a26b2a7f023762e\" returns successfully" Jan 14 13:21:07.950969 kubelet[3424]: I0114 13:21:07.950517 3424 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9928g" podStartSLOduration=33.950470219 podStartE2EDuration="33.950470219s" podCreationTimestamp="2025-01-14 13:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:21:07.950439819 +0000 UTC m=+47.284327471" watchObservedRunningTime="2025-01-14 13:21:07.950470219 +0000 UTC m=+47.284357871" Jan 14 13:21:07.966989 kubelet[3424]: I0114 13:21:07.966940 3424 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-pgbnd" podStartSLOduration=33.966883521 podStartE2EDuration="33.966883521s" podCreationTimestamp="2025-01-14 13:20:34 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:21:07.96642721 +0000 UTC m=+47.300314862" watchObservedRunningTime="2025-01-14 13:21:07.966883521 +0000 UTC m=+47.300771073" Jan 14 13:21:58.358714 update_engine[1688]: I20250114 13:21:58.358253 1688 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 14 13:21:58.358714 update_engine[1688]: I20250114 13:21:58.358338 1688 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 14 13:21:58.359340 update_engine[1688]: I20250114 13:21:58.358590 1688 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 14 13:21:58.359823 update_engine[1688]: I20250114 13:21:58.359791 1688 omaha_request_params.cc:62] Current group set to stable Jan 14 13:21:58.359952 update_engine[1688]: I20250114 13:21:58.359928 1688 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 14 13:21:58.359952 update_engine[1688]: I20250114 13:21:58.359943 1688 update_attempter.cc:643] Scheduling an action processor start. 
Jan 14 13:21:58.360039 update_engine[1688]: I20250114 13:21:58.359963 1688 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 13:21:58.360039 update_engine[1688]: I20250114 13:21:58.360002 1688 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 14 13:21:58.360113 update_engine[1688]: I20250114 13:21:58.360088 1688 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 13:21:58.360113 update_engine[1688]: I20250114 13:21:58.360100 1688 omaha_request_action.cc:272] Request: Jan 14 13:21:58.360113 update_engine[1688]: Jan 14 13:21:58.360113 update_engine[1688]: Jan 14 13:21:58.360113 update_engine[1688]: Jan 14 13:21:58.360113 update_engine[1688]: Jan 14 13:21:58.360113 update_engine[1688]: Jan 14 13:21:58.360113 update_engine[1688]: Jan 14 13:21:58.360113 update_engine[1688]: Jan 14 13:21:58.360113 update_engine[1688]: Jan 14 13:21:58.360113 update_engine[1688]: I20250114 13:21:58.360109 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 13:21:58.361357 locksmithd[1733]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 14 13:21:58.361907 update_engine[1688]: I20250114 13:21:58.361793 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 13:21:58.362183 update_engine[1688]: I20250114 13:21:58.362152 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 13:21:58.371240 update_engine[1688]: E20250114 13:21:58.371203 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 14 13:21:58.371322 update_engine[1688]: I20250114 13:21:58.371285 1688 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 14 13:22:08.319398 update_engine[1688]: I20250114 13:22:08.319304 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 13:22:08.319943 update_engine[1688]: I20250114 13:22:08.319671 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 13:22:08.320207 update_engine[1688]: I20250114 13:22:08.320047 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 13:22:08.378082 update_engine[1688]: E20250114 13:22:08.378005 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 14 13:22:08.378293 update_engine[1688]: I20250114 13:22:08.378170 1688 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 14 13:22:18.319360 update_engine[1688]: I20250114 13:22:18.319221 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 13:22:18.319976 update_engine[1688]: I20250114 13:22:18.319641 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 13:22:18.320043 update_engine[1688]: I20250114 13:22:18.319975 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 13:22:18.331889 update_engine[1688]: E20250114 13:22:18.331820 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 14 13:22:18.332032 update_engine[1688]: I20250114 13:22:18.331919 1688 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 14 13:22:21.660912 systemd[1]: Started sshd@7-10.200.4.47:22-10.200.16.10:54214.service - OpenSSH per-connection server daemon (10.200.16.10:54214). 
Jan 14 13:22:22.272525 sshd[4803]: Accepted publickey for core from 10.200.16.10 port 54214 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:22:22.274154 sshd-session[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:22:22.279021 systemd-logind[1684]: New session 10 of user core. Jan 14 13:22:22.283770 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 14 13:22:22.778864 sshd[4805]: Connection closed by 10.200.16.10 port 54214 Jan 14 13:22:22.779952 sshd-session[4803]: pam_unix(sshd:session): session closed for user core Jan 14 13:22:22.784712 systemd[1]: sshd@7-10.200.4.47:22-10.200.16.10:54214.service: Deactivated successfully. Jan 14 13:22:22.789755 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 13:22:22.790804 systemd-logind[1684]: Session 10 logged out. Waiting for processes to exit. Jan 14 13:22:22.792446 systemd-logind[1684]: Removed session 10. Jan 14 13:22:27.892924 systemd[1]: Started sshd@8-10.200.4.47:22-10.200.16.10:58912.service - OpenSSH per-connection server daemon (10.200.16.10:58912). Jan 14 13:22:28.312253 update_engine[1688]: I20250114 13:22:28.312188 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 13:22:28.312743 update_engine[1688]: I20250114 13:22:28.312465 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 13:22:28.312838 update_engine[1688]: I20250114 13:22:28.312803 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 13:22:28.320749 update_engine[1688]: E20250114 13:22:28.320701 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 14 13:22:28.320881 update_engine[1688]: I20250114 13:22:28.320780 1688 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 13:22:28.320881 update_engine[1688]: I20250114 13:22:28.320793 1688 omaha_request_action.cc:617] Omaha request response: Jan 14 13:22:28.320959 update_engine[1688]: E20250114 13:22:28.320904 1688 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 14 13:22:28.320959 update_engine[1688]: I20250114 13:22:28.320933 1688 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 14 13:22:28.320959 update_engine[1688]: I20250114 13:22:28.320942 1688 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 13:22:28.320959 update_engine[1688]: I20250114 13:22:28.320948 1688 update_attempter.cc:306] Processing Done. Jan 14 13:22:28.321163 update_engine[1688]: E20250114 13:22:28.320971 1688 update_attempter.cc:619] Update failed. Jan 14 13:22:28.321163 update_engine[1688]: I20250114 13:22:28.320978 1688 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 14 13:22:28.321163 update_engine[1688]: I20250114 13:22:28.320986 1688 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 14 13:22:28.321163 update_engine[1688]: I20250114 13:22:28.320995 1688 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 14 13:22:28.321163 update_engine[1688]: I20250114 13:22:28.321081 1688 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 13:22:28.321163 update_engine[1688]: I20250114 13:22:28.321110 1688 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 13:22:28.321163 update_engine[1688]: I20250114 13:22:28.321118 1688 omaha_request_action.cc:272] Request: Jan 14 13:22:28.321163 update_engine[1688]: Jan 14 13:22:28.321163 update_engine[1688]: Jan 14 13:22:28.321163 update_engine[1688]: Jan 14 13:22:28.321163 update_engine[1688]: Jan 14 13:22:28.321163 update_engine[1688]: Jan 14 13:22:28.321163 update_engine[1688]: Jan 14 13:22:28.321163 update_engine[1688]: I20250114 13:22:28.321127 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 13:22:28.321659 update_engine[1688]: I20250114 13:22:28.321333 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 13:22:28.321659 update_engine[1688]: I20250114 13:22:28.321563 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 13:22:28.321926 locksmithd[1733]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 14 13:22:28.329526 update_engine[1688]: E20250114 13:22:28.329483 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 14 13:22:28.329650 update_engine[1688]: I20250114 13:22:28.329553 1688 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 13:22:28.329650 update_engine[1688]: I20250114 13:22:28.329566 1688 omaha_request_action.cc:617] Omaha request response: Jan 14 13:22:28.329650 update_engine[1688]: I20250114 13:22:28.329577 1688 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 13:22:28.329650 update_engine[1688]: I20250114 13:22:28.329584 1688 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 13:22:28.329650 update_engine[1688]: I20250114 13:22:28.329591 1688 update_attempter.cc:306] Processing Done. Jan 14 13:22:28.329650 update_engine[1688]: I20250114 13:22:28.329600 1688 update_attempter.cc:310] Error event sent. Jan 14 13:22:28.329650 update_engine[1688]: I20250114 13:22:28.329625 1688 update_check_scheduler.cc:74] Next update check in 45m54s Jan 14 13:22:28.330016 locksmithd[1733]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 14 13:22:28.507444 sshd[4817]: Accepted publickey for core from 10.200.16.10 port 58912 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:22:28.509236 sshd-session[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:22:28.514236 systemd-logind[1684]: New session 11 of user core. Jan 14 13:22:28.518774 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 14 13:22:28.997327 sshd[4819]: Connection closed by 10.200.16.10 port 58912
Jan 14 13:22:28.998269 sshd-session[4817]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:29.002441 systemd[1]: sshd@8-10.200.4.47:22-10.200.16.10:58912.service: Deactivated successfully.
Jan 14 13:22:29.004744 systemd[1]: session-11.scope: Deactivated successfully.
Jan 14 13:22:29.005564 systemd-logind[1684]: Session 11 logged out. Waiting for processes to exit.
Jan 14 13:22:29.006632 systemd-logind[1684]: Removed session 11.
Jan 14 13:22:34.110962 systemd[1]: Started sshd@9-10.200.4.47:22-10.200.16.10:58928.service - OpenSSH per-connection server daemon (10.200.16.10:58928).
Jan 14 13:22:34.720207 sshd[4831]: Accepted publickey for core from 10.200.16.10 port 58928 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:34.722101 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:34.728132 systemd-logind[1684]: New session 12 of user core.
Jan 14 13:22:34.731779 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 14 13:22:35.212290 sshd[4833]: Connection closed by 10.200.16.10 port 58928
Jan 14 13:22:35.213158 sshd-session[4831]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:35.217043 systemd[1]: sshd@9-10.200.4.47:22-10.200.16.10:58928.service: Deactivated successfully.
Jan 14 13:22:35.219456 systemd[1]: session-12.scope: Deactivated successfully.
Jan 14 13:22:35.220642 systemd-logind[1684]: Session 12 logged out. Waiting for processes to exit.
Jan 14 13:22:35.221685 systemd-logind[1684]: Removed session 12.
Jan 14 13:22:40.325951 systemd[1]: Started sshd@10-10.200.4.47:22-10.200.16.10:45828.service - OpenSSH per-connection server daemon (10.200.16.10:45828).
Jan 14 13:22:40.932539 sshd[4847]: Accepted publickey for core from 10.200.16.10 port 45828 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:40.934127 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:40.939756 systemd-logind[1684]: New session 13 of user core.
Jan 14 13:22:40.956828 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 14 13:22:41.426195 sshd[4849]: Connection closed by 10.200.16.10 port 45828
Jan 14 13:22:41.427064 sshd-session[4847]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:41.431545 systemd[1]: sshd@10-10.200.4.47:22-10.200.16.10:45828.service: Deactivated successfully.
Jan 14 13:22:41.434146 systemd[1]: session-13.scope: Deactivated successfully.
Jan 14 13:22:41.435247 systemd-logind[1684]: Session 13 logged out. Waiting for processes to exit.
Jan 14 13:22:41.436632 systemd-logind[1684]: Removed session 13.
Jan 14 13:22:46.534844 systemd[1]: Started sshd@11-10.200.4.47:22-10.200.16.10:57546.service - OpenSSH per-connection server daemon (10.200.16.10:57546).
Jan 14 13:22:47.163652 sshd[4862]: Accepted publickey for core from 10.200.16.10 port 57546 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:47.165161 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:47.169271 systemd-logind[1684]: New session 14 of user core.
Jan 14 13:22:47.174770 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 14 13:22:47.658395 sshd[4864]: Connection closed by 10.200.16.10 port 57546
Jan 14 13:22:47.659819 sshd-session[4862]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:47.664241 systemd-logind[1684]: Session 14 logged out. Waiting for processes to exit.
Jan 14 13:22:47.665194 systemd[1]: sshd@11-10.200.4.47:22-10.200.16.10:57546.service: Deactivated successfully.
Jan 14 13:22:47.667814 systemd[1]: session-14.scope: Deactivated successfully.
Jan 14 13:22:47.668870 systemd-logind[1684]: Removed session 14.
Jan 14 13:22:47.773924 systemd[1]: Started sshd@12-10.200.4.47:22-10.200.16.10:57550.service - OpenSSH per-connection server daemon (10.200.16.10:57550).
Jan 14 13:22:48.386863 sshd[4876]: Accepted publickey for core from 10.200.16.10 port 57550 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:48.388450 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:48.393318 systemd-logind[1684]: New session 15 of user core.
Jan 14 13:22:48.397758 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 14 13:22:48.911687 sshd[4878]: Connection closed by 10.200.16.10 port 57550
Jan 14 13:22:48.913580 sshd-session[4876]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:48.916600 systemd[1]: sshd@12-10.200.4.47:22-10.200.16.10:57550.service: Deactivated successfully.
Jan 14 13:22:48.919910 systemd[1]: session-15.scope: Deactivated successfully.
Jan 14 13:22:48.921656 systemd-logind[1684]: Session 15 logged out. Waiting for processes to exit.
Jan 14 13:22:48.922795 systemd-logind[1684]: Removed session 15.
Jan 14 13:22:49.019086 systemd[1]: Started sshd@13-10.200.4.47:22-10.200.16.10:57554.service - OpenSSH per-connection server daemon (10.200.16.10:57554).
Jan 14 13:22:49.635804 sshd[4887]: Accepted publickey for core from 10.200.16.10 port 57554 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:49.637430 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:49.641787 systemd-logind[1684]: New session 16 of user core.
Jan 14 13:22:49.645792 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 14 13:22:50.141683 sshd[4889]: Connection closed by 10.200.16.10 port 57554
Jan 14 13:22:50.142572 sshd-session[4887]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:50.147243 systemd-logind[1684]: Session 16 logged out. Waiting for processes to exit.
Jan 14 13:22:50.147318 systemd[1]: sshd@13-10.200.4.47:22-10.200.16.10:57554.service: Deactivated successfully.
Jan 14 13:22:50.149978 systemd[1]: session-16.scope: Deactivated successfully.
Jan 14 13:22:50.152105 systemd-logind[1684]: Removed session 16.
Jan 14 13:22:55.253949 systemd[1]: Started sshd@14-10.200.4.47:22-10.200.16.10:57566.service - OpenSSH per-connection server daemon (10.200.16.10:57566).
Jan 14 13:22:55.861046 sshd[4899]: Accepted publickey for core from 10.200.16.10 port 57566 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:55.862622 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:55.866839 systemd-logind[1684]: New session 17 of user core.
Jan 14 13:22:55.870787 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 14 13:22:56.357040 sshd[4901]: Connection closed by 10.200.16.10 port 57566
Jan 14 13:22:56.357543 sshd-session[4899]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:56.361662 systemd[1]: sshd@14-10.200.4.47:22-10.200.16.10:57566.service: Deactivated successfully.
Jan 14 13:22:56.364000 systemd[1]: session-17.scope: Deactivated successfully.
Jan 14 13:22:56.366026 systemd-logind[1684]: Session 17 logged out. Waiting for processes to exit.
Jan 14 13:22:56.367324 systemd-logind[1684]: Removed session 17.
Jan 14 13:23:01.469913 systemd[1]: Started sshd@15-10.200.4.47:22-10.200.16.10:43558.service - OpenSSH per-connection server daemon (10.200.16.10:43558).
Jan 14 13:23:02.082198 sshd[4912]: Accepted publickey for core from 10.200.16.10 port 43558 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:23:02.083791 sshd-session[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:23:02.088031 systemd-logind[1684]: New session 18 of user core.
Jan 14 13:23:02.092767 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 14 13:23:02.583985 sshd[4914]: Connection closed by 10.200.16.10 port 43558
Jan 14 13:23:02.584944 sshd-session[4912]: pam_unix(sshd:session): session closed for user core
Jan 14 13:23:02.587927 systemd[1]: sshd@15-10.200.4.47:22-10.200.16.10:43558.service: Deactivated successfully.
Jan 14 13:23:02.590246 systemd[1]: session-18.scope: Deactivated successfully.
Jan 14 13:23:02.591723 systemd-logind[1684]: Session 18 logged out. Waiting for processes to exit.
Jan 14 13:23:02.593057 systemd-logind[1684]: Removed session 18.
Jan 14 13:23:02.692250 systemd[1]: Started sshd@16-10.200.4.47:22-10.200.16.10:43568.service - OpenSSH per-connection server daemon (10.200.16.10:43568).
Jan 14 13:23:03.319273 sshd[4925]: Accepted publickey for core from 10.200.16.10 port 43568 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:23:03.321074 sshd-session[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:23:03.325540 systemd-logind[1684]: New session 19 of user core.
Jan 14 13:23:03.331784 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 14 13:23:03.843071 sshd[4927]: Connection closed by 10.200.16.10 port 43568
Jan 14 13:23:03.844872 sshd-session[4925]: pam_unix(sshd:session): session closed for user core
Jan 14 13:23:03.847641 systemd[1]: sshd@16-10.200.4.47:22-10.200.16.10:43568.service: Deactivated successfully.
Jan 14 13:23:03.850014 systemd[1]: session-19.scope: Deactivated successfully.
Jan 14 13:23:03.851583 systemd-logind[1684]: Session 19 logged out. Waiting for processes to exit.
Jan 14 13:23:03.853293 systemd-logind[1684]: Removed session 19.
Jan 14 13:23:03.958970 systemd[1]: Started sshd@17-10.200.4.47:22-10.200.16.10:43578.service - OpenSSH per-connection server daemon (10.200.16.10:43578).
Jan 14 13:23:04.566195 sshd[4936]: Accepted publickey for core from 10.200.16.10 port 43578 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:23:04.567758 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:23:04.572753 systemd-logind[1684]: New session 20 of user core.
Jan 14 13:23:04.577798 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 14 13:23:06.515505 sshd[4938]: Connection closed by 10.200.16.10 port 43578
Jan 14 13:23:06.516385 sshd-session[4936]: pam_unix(sshd:session): session closed for user core
Jan 14 13:23:06.521102 systemd[1]: sshd@17-10.200.4.47:22-10.200.16.10:43578.service: Deactivated successfully.
Jan 14 13:23:06.523572 systemd[1]: session-20.scope: Deactivated successfully.
Jan 14 13:23:06.524580 systemd-logind[1684]: Session 20 logged out. Waiting for processes to exit.
Jan 14 13:23:06.525896 systemd-logind[1684]: Removed session 20.
Jan 14 13:23:06.633987 systemd[1]: Started sshd@18-10.200.4.47:22-10.200.16.10:42840.service - OpenSSH per-connection server daemon (10.200.16.10:42840).
Jan 14 13:23:07.247525 sshd[4954]: Accepted publickey for core from 10.200.16.10 port 42840 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:23:07.249345 sshd-session[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:23:07.255226 systemd-logind[1684]: New session 21 of user core.
Jan 14 13:23:07.258797 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 14 13:23:07.833984 sshd[4958]: Connection closed by 10.200.16.10 port 42840
Jan 14 13:23:07.834886 sshd-session[4954]: pam_unix(sshd:session): session closed for user core
Jan 14 13:23:07.839376 systemd[1]: sshd@18-10.200.4.47:22-10.200.16.10:42840.service: Deactivated successfully.
Jan 14 13:23:07.841540 systemd[1]: session-21.scope: Deactivated successfully.
Jan 14 13:23:07.842728 systemd-logind[1684]: Session 21 logged out. Waiting for processes to exit.
Jan 14 13:23:07.844140 systemd-logind[1684]: Removed session 21.
Jan 14 13:23:07.950914 systemd[1]: Started sshd@19-10.200.4.47:22-10.200.16.10:42856.service - OpenSSH per-connection server daemon (10.200.16.10:42856).
Jan 14 13:23:08.561560 sshd[4967]: Accepted publickey for core from 10.200.16.10 port 42856 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:23:08.563322 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:23:08.568939 systemd-logind[1684]: New session 22 of user core.
Jan 14 13:23:08.576764 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 14 13:23:09.046522 sshd[4969]: Connection closed by 10.200.16.10 port 42856
Jan 14 13:23:09.048291 sshd-session[4967]: pam_unix(sshd:session): session closed for user core
Jan 14 13:23:09.051047 systemd[1]: sshd@19-10.200.4.47:22-10.200.16.10:42856.service: Deactivated successfully.
Jan 14 13:23:09.053581 systemd[1]: session-22.scope: Deactivated successfully.
Jan 14 13:23:09.055454 systemd-logind[1684]: Session 22 logged out. Waiting for processes to exit.
Jan 14 13:23:09.056768 systemd-logind[1684]: Removed session 22.
Jan 14 13:23:14.159954 systemd[1]: Started sshd@20-10.200.4.47:22-10.200.16.10:42864.service - OpenSSH per-connection server daemon (10.200.16.10:42864).
Jan 14 13:23:14.763646 sshd[4982]: Accepted publickey for core from 10.200.16.10 port 42864 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:23:14.765742 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:23:14.770682 systemd-logind[1684]: New session 23 of user core.
Jan 14 13:23:14.775767 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 14 13:23:15.253140 sshd[4984]: Connection closed by 10.200.16.10 port 42864
Jan 14 13:23:15.254047 sshd-session[4982]: pam_unix(sshd:session): session closed for user core
Jan 14 13:23:15.258061 systemd-logind[1684]: Session 23 logged out. Waiting for processes to exit.
Jan 14 13:23:15.259042 systemd[1]: sshd@20-10.200.4.47:22-10.200.16.10:42864.service: Deactivated successfully.
Jan 14 13:23:15.261167 systemd[1]: session-23.scope: Deactivated successfully.
Jan 14 13:23:15.262263 systemd-logind[1684]: Removed session 23.
Jan 14 13:23:20.583878 systemd[1]: Started sshd@21-10.200.4.47:22-10.200.16.10:43836.service - OpenSSH per-connection server daemon (10.200.16.10:43836).
Jan 14 13:23:21.307884 sshd[4994]: Accepted publickey for core from 10.200.16.10 port 43836 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:23:21.309633 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:23:21.314896 systemd-logind[1684]: New session 24 of user core.
Jan 14 13:23:21.320762 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 14 13:23:21.919637 sshd[4998]: Connection closed by 10.200.16.10 port 43836
Jan 14 13:23:21.920499 sshd-session[4994]: pam_unix(sshd:session): session closed for user core
Jan 14 13:23:21.923943 systemd[1]: sshd@21-10.200.4.47:22-10.200.16.10:43836.service: Deactivated successfully.
Jan 14 13:23:21.926357 systemd[1]: session-24.scope: Deactivated successfully.
Jan 14 13:23:21.927912 systemd-logind[1684]: Session 24 logged out. Waiting for processes to exit.
Jan 14 13:23:21.929040 systemd-logind[1684]: Removed session 24.
Jan 14 13:23:27.026864 systemd[1]: Started sshd@22-10.200.4.47:22-10.200.16.10:46462.service - OpenSSH per-connection server daemon (10.200.16.10:46462).
Jan 14 13:23:27.638278 sshd[5009]: Accepted publickey for core from 10.200.16.10 port 46462 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:23:27.640102 sshd-session[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:23:27.646571 systemd-logind[1684]: New session 25 of user core.
Jan 14 13:23:27.650802 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 14 13:23:28.130246 sshd[5011]: Connection closed by 10.200.16.10 port 46462
Jan 14 13:23:28.131900 sshd-session[5009]: pam_unix(sshd:session): session closed for user core
Jan 14 13:23:28.135010 systemd[1]: sshd@22-10.200.4.47:22-10.200.16.10:46462.service: Deactivated successfully.
Jan 14 13:23:28.137496 systemd[1]: session-25.scope: Deactivated successfully.
Jan 14 13:23:28.139347 systemd-logind[1684]: Session 25 logged out. Waiting for processes to exit.
Jan 14 13:23:28.140394 systemd-logind[1684]: Removed session 25.
Jan 14 13:23:28.238285 systemd[1]: Started sshd@23-10.200.4.47:22-10.200.16.10:46466.service - OpenSSH per-connection server daemon (10.200.16.10:46466).
Jan 14 13:23:28.849420 sshd[5022]: Accepted publickey for core from 10.200.16.10 port 46466 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:23:28.851255 sshd-session[5022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:23:28.856284 systemd-logind[1684]: New session 26 of user core.
Jan 14 13:23:28.861776 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 14 13:23:30.488133 containerd[1713]: time="2025-01-14T13:23:30.487914051Z" level=info msg="StopContainer for \"9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce\" with timeout 30 (s)"
Jan 14 13:23:30.491093 containerd[1713]: time="2025-01-14T13:23:30.490151793Z" level=info msg="Stop container \"9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce\" with signal terminated"
Jan 14 13:23:30.502113 systemd[1]: run-containerd-runc-k8s.io-629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325-runc.0yrLeO.mount: Deactivated successfully.
Jan 14 13:23:30.515331 containerd[1713]: time="2025-01-14T13:23:30.515289068Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 13:23:30.520455 systemd[1]: cri-containerd-9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce.scope: Deactivated successfully.
Jan 14 13:23:30.525684 containerd[1713]: time="2025-01-14T13:23:30.525639163Z" level=info msg="StopContainer for \"629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325\" with timeout 2 (s)"
Jan 14 13:23:30.526320 containerd[1713]: time="2025-01-14T13:23:30.526274175Z" level=info msg="Stop container \"629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325\" with signal terminated"
Jan 14 13:23:30.540852 systemd-networkd[1438]: lxc_health: Link DOWN
Jan 14 13:23:30.540862 systemd-networkd[1438]: lxc_health: Lost carrier
Jan 14 13:23:30.559817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce-rootfs.mount: Deactivated successfully.
Jan 14 13:23:30.563007 systemd[1]: cri-containerd-629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325.scope: Deactivated successfully.
Jan 14 13:23:30.563315 systemd[1]: cri-containerd-629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325.scope: Consumed 7.244s CPU time.
Jan 14 13:23:30.590358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325-rootfs.mount: Deactivated successfully.
Jan 14 13:23:30.646202 containerd[1713]: time="2025-01-14T13:23:30.646128238Z" level=info msg="shim disconnected" id=629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325 namespace=k8s.io
Jan 14 13:23:30.646202 containerd[1713]: time="2025-01-14T13:23:30.646207539Z" level=warning msg="cleaning up after shim disconnected" id=629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325 namespace=k8s.io
Jan 14 13:23:30.646648 containerd[1713]: time="2025-01-14T13:23:30.646220539Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:23:30.647639 containerd[1713]: time="2025-01-14T13:23:30.647255559Z" level=info msg="shim disconnected" id=9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce namespace=k8s.io
Jan 14 13:23:30.647639 containerd[1713]: time="2025-01-14T13:23:30.647307960Z" level=warning msg="cleaning up after shim disconnected" id=9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce namespace=k8s.io
Jan 14 13:23:30.647639 containerd[1713]: time="2025-01-14T13:23:30.647319960Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:23:30.677070 containerd[1713]: time="2025-01-14T13:23:30.676964920Z" level=info msg="StopContainer for \"9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce\" returns successfully"
Jan 14 13:23:30.677237 containerd[1713]: time="2025-01-14T13:23:30.676995420Z" level=info msg="StopContainer for \"629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325\" returns successfully"
Jan 14 13:23:30.677982 containerd[1713]: time="2025-01-14T13:23:30.677945938Z" level=info msg="StopPodSandbox for \"19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c\""
Jan 14 13:23:30.678121 containerd[1713]: time="2025-01-14T13:23:30.677994939Z" level=info msg="Container to stop \"f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 14 13:23:30.678121 containerd[1713]: time="2025-01-14T13:23:30.678039740Z" level=info msg="Container to stop \"e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 14 13:23:30.678121 containerd[1713]: time="2025-01-14T13:23:30.678052640Z" level=info msg="Container to stop \"629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 14 13:23:30.678121 containerd[1713]: time="2025-01-14T13:23:30.678066741Z" level=info msg="Container to stop \"d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 14 13:23:30.678121 containerd[1713]: time="2025-01-14T13:23:30.678078441Z" level=info msg="Container to stop \"be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 14 13:23:30.678510 containerd[1713]: time="2025-01-14T13:23:30.678357746Z" level=info msg="StopPodSandbox for \"7f16e1882834e4f35bd0f9a5b338205143dd67bfd77f6caec57a8dccbfc2015a\""
Jan 14 13:23:30.678510 containerd[1713]: time="2025-01-14T13:23:30.678396047Z" level=info msg="Container to stop \"9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 14 13:23:30.682669 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c-shm.mount: Deactivated successfully.
Jan 14 13:23:30.682803 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f16e1882834e4f35bd0f9a5b338205143dd67bfd77f6caec57a8dccbfc2015a-shm.mount: Deactivated successfully.
Jan 14 13:23:30.690201 systemd[1]: cri-containerd-19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c.scope: Deactivated successfully.
Jan 14 13:23:30.694087 systemd[1]: cri-containerd-7f16e1882834e4f35bd0f9a5b338205143dd67bfd77f6caec57a8dccbfc2015a.scope: Deactivated successfully.
Jan 14 13:23:30.734333 containerd[1713]: time="2025-01-14T13:23:30.734203801Z" level=info msg="shim disconnected" id=7f16e1882834e4f35bd0f9a5b338205143dd67bfd77f6caec57a8dccbfc2015a namespace=k8s.io
Jan 14 13:23:30.734333 containerd[1713]: time="2025-01-14T13:23:30.734275702Z" level=warning msg="cleaning up after shim disconnected" id=7f16e1882834e4f35bd0f9a5b338205143dd67bfd77f6caec57a8dccbfc2015a namespace=k8s.io
Jan 14 13:23:30.734333 containerd[1713]: time="2025-01-14T13:23:30.734287602Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:23:30.737117 containerd[1713]: time="2025-01-14T13:23:30.736894051Z" level=info msg="shim disconnected" id=19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c namespace=k8s.io
Jan 14 13:23:30.737117 containerd[1713]: time="2025-01-14T13:23:30.736947852Z" level=warning msg="cleaning up after shim disconnected" id=19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c namespace=k8s.io
Jan 14 13:23:30.737117 containerd[1713]: time="2025-01-14T13:23:30.736958653Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:23:30.753847 containerd[1713]: time="2025-01-14T13:23:30.753596667Z" level=warning msg="cleanup warnings time=\"2025-01-14T13:23:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 14 13:23:30.755725 containerd[1713]: time="2025-01-14T13:23:30.755578504Z" level=info msg="TearDown network for sandbox \"7f16e1882834e4f35bd0f9a5b338205143dd67bfd77f6caec57a8dccbfc2015a\" successfully"
Jan 14 13:23:30.755725 containerd[1713]: time="2025-01-14T13:23:30.755648005Z" level=info msg="StopPodSandbox for \"7f16e1882834e4f35bd0f9a5b338205143dd67bfd77f6caec57a8dccbfc2015a\" returns successfully"
Jan 14 13:23:30.760730 containerd[1713]: time="2025-01-14T13:23:30.760595499Z" level=info msg="TearDown network for sandbox \"19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c\" successfully"
Jan 14 13:23:30.760730 containerd[1713]: time="2025-01-14T13:23:30.760722301Z" level=info msg="StopPodSandbox for \"19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c\" returns successfully"
Jan 14 13:23:30.791217 kubelet[3424]: I0114 13:23:30.791085 3424 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-hubble-tls\") pod \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") "
Jan 14 13:23:30.792271 kubelet[3424]: I0114 13:23:30.792247 3424 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-cni-path\") pod \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") "
Jan 14 13:23:30.792377 kubelet[3424]: I0114 13:23:30.792301 3424 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-cilium-config-path\") pod \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") "
Jan 14 13:23:30.792377 kubelet[3424]: I0114 13:23:30.792328 3424 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-xtables-lock\") pod \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") "
Jan 14 13:23:30.792377 kubelet[3424]: I0114 13:23:30.792356 3424 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d53f8a3f-b093-4949-80d5-de3a37614046-cilium-config-path\") pod \"d53f8a3f-b093-4949-80d5-de3a37614046\" (UID: \"d53f8a3f-b093-4949-80d5-de3a37614046\") "
Jan 14 13:23:30.792516 kubelet[3424]: I0114 13:23:30.792380 3424 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-host-proc-sys-net\") pod \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") "
Jan 14 13:23:30.792516 kubelet[3424]: I0114 13:23:30.792406 3424 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-cilium-run\") pod \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") "
Jan 14 13:23:30.792516 kubelet[3424]: I0114 13:23:30.792433 3424 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-host-proc-sys-kernel\") pod \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") "
Jan 14 13:23:30.792516 kubelet[3424]: I0114 13:23:30.792462 3424 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-clustermesh-secrets\") pod \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") "
Jan 14 13:23:30.792516 kubelet[3424]: I0114 13:23:30.792491 3424 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq7k6\" (UniqueName: \"kubernetes.io/projected/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-kube-api-access-sq7k6\") pod \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") "
Jan 14 13:23:30.792516 kubelet[3424]: I0114 13:23:30.792518 3424 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-lib-modules\") pod \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") "
Jan 14 13:23:30.792778 kubelet[3424]: I0114 13:23:30.792541 3424 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-hostproc\") pod \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") "
Jan 14 13:23:30.792778 kubelet[3424]: I0114 13:23:30.792572 3424 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tshgx\" (UniqueName: \"kubernetes.io/projected/d53f8a3f-b093-4949-80d5-de3a37614046-kube-api-access-tshgx\") pod \"d53f8a3f-b093-4949-80d5-de3a37614046\" (UID: \"d53f8a3f-b093-4949-80d5-de3a37614046\") "
Jan 14 13:23:30.792778 kubelet[3424]: I0114 13:23:30.792598 3424 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-etc-cni-netd\") pod \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") "
Jan 14 13:23:30.792778 kubelet[3424]: I0114 13:23:30.792634 3424 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-cilium-cgroup\") pod \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") "
Jan 14 13:23:30.792778 kubelet[3424]: I0114 13:23:30.792660 3424 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-bpf-maps\") pod \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\" (UID: \"6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6\") "
Jan 14 13:23:30.792778 kubelet[3424]: I0114 13:23:30.792712 3424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" (UID: "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:23:30.793027 kubelet[3424]: I0114 13:23:30.792750 3424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-cni-path" (OuterVolumeSpecName: "cni-path") pod "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" (UID: "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:23:30.796654 kubelet[3424]: I0114 13:23:30.795032 3424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" (UID: "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 14 13:23:30.796654 kubelet[3424]: I0114 13:23:30.795091 3424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" (UID: "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:23:30.798015 kubelet[3424]: I0114 13:23:30.797981 3424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d53f8a3f-b093-4949-80d5-de3a37614046-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d53f8a3f-b093-4949-80d5-de3a37614046" (UID: "d53f8a3f-b093-4949-80d5-de3a37614046"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 14 13:23:30.798098 kubelet[3424]: I0114 13:23:30.798031 3424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" (UID: "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:23:30.798098 kubelet[3424]: I0114 13:23:30.798054 3424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-hostproc" (OuterVolumeSpecName: "hostproc") pod "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" (UID: "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:23:30.798462 kubelet[3424]: I0114 13:23:30.798425 3424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" (UID: "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:23:30.798547 kubelet[3424]: I0114 13:23:30.798470 3424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" (UID: "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:23:30.798547 kubelet[3424]: I0114 13:23:30.798491 3424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" (UID: "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:23:30.800525 kubelet[3424]: I0114 13:23:30.800128 3424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" (UID: "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:23:30.800525 kubelet[3424]: I0114 13:23:30.800169 3424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" (UID: "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:23:30.800525 kubelet[3424]: I0114 13:23:30.800248 3424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" (UID: "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 14 13:23:30.806437 kubelet[3424]: I0114 13:23:30.806410 3424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-kube-api-access-sq7k6" (OuterVolumeSpecName: "kube-api-access-sq7k6") pod "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" (UID: "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6"). InnerVolumeSpecName "kube-api-access-sq7k6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 14 13:23:30.808373 kubelet[3424]: I0114 13:23:30.808343 3424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d53f8a3f-b093-4949-80d5-de3a37614046-kube-api-access-tshgx" (OuterVolumeSpecName: "kube-api-access-tshgx") pod "d53f8a3f-b093-4949-80d5-de3a37614046" (UID: "d53f8a3f-b093-4949-80d5-de3a37614046"). InnerVolumeSpecName "kube-api-access-tshgx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 14 13:23:30.808458 kubelet[3424]: I0114 13:23:30.808441 3424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" (UID: "6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 14 13:23:30.893852 kubelet[3424]: I0114 13:23:30.893805 3424 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-cni-path\") on node \"ci-4152.2.0-a-ae9609fe4e\" DevicePath \"\""
Jan 14 13:23:30.893852 kubelet[3424]: I0114 13:23:30.893851 3424 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-cilium-config-path\") on node \"ci-4152.2.0-a-ae9609fe4e\" DevicePath \"\""
Jan 14 13:23:30.893852 kubelet[3424]: I0114 13:23:30.893870 3424 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-hubble-tls\") on node \"ci-4152.2.0-a-ae9609fe4e\" DevicePath \"\""
Jan 14 13:23:30.894173 kubelet[3424]: I0114 13:23:30.893889 3424 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d53f8a3f-b093-4949-80d5-de3a37614046-cilium-config-path\") on node \"ci-4152.2.0-a-ae9609fe4e\" DevicePath \"\""
Jan 14 13:23:30.894173 kubelet[3424]: I0114 13:23:30.893911 3424 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-xtables-lock\") on node \"ci-4152.2.0-a-ae9609fe4e\" DevicePath \"\""
Jan 14 13:23:30.894173 kubelet[3424]: I0114 13:23:30.893927 3424
reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-cilium-run\") on node \"ci-4152.2.0-a-ae9609fe4e\" DevicePath \"\"" Jan 14 13:23:30.894173 kubelet[3424]: I0114 13:23:30.893946 3424 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-host-proc-sys-kernel\") on node \"ci-4152.2.0-a-ae9609fe4e\" DevicePath \"\"" Jan 14 13:23:30.894173 kubelet[3424]: I0114 13:23:30.893962 3424 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-clustermesh-secrets\") on node \"ci-4152.2.0-a-ae9609fe4e\" DevicePath \"\"" Jan 14 13:23:30.894173 kubelet[3424]: I0114 13:23:30.893979 3424 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sq7k6\" (UniqueName: \"kubernetes.io/projected/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-kube-api-access-sq7k6\") on node \"ci-4152.2.0-a-ae9609fe4e\" DevicePath \"\"" Jan 14 13:23:30.894173 kubelet[3424]: I0114 13:23:30.893997 3424 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-host-proc-sys-net\") on node \"ci-4152.2.0-a-ae9609fe4e\" DevicePath \"\"" Jan 14 13:23:30.894173 kubelet[3424]: I0114 13:23:30.894014 3424 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-lib-modules\") on node \"ci-4152.2.0-a-ae9609fe4e\" DevicePath \"\"" Jan 14 13:23:30.894414 kubelet[3424]: I0114 13:23:30.894030 3424 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-hostproc\") on node \"ci-4152.2.0-a-ae9609fe4e\" DevicePath \"\"" Jan 14 13:23:30.894414 kubelet[3424]: I0114 
13:23:30.894048 3424 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tshgx\" (UniqueName: \"kubernetes.io/projected/d53f8a3f-b093-4949-80d5-de3a37614046-kube-api-access-tshgx\") on node \"ci-4152.2.0-a-ae9609fe4e\" DevicePath \"\"" Jan 14 13:23:30.894414 kubelet[3424]: I0114 13:23:30.894065 3424 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-etc-cni-netd\") on node \"ci-4152.2.0-a-ae9609fe4e\" DevicePath \"\"" Jan 14 13:23:30.894414 kubelet[3424]: I0114 13:23:30.894086 3424 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-cilium-cgroup\") on node \"ci-4152.2.0-a-ae9609fe4e\" DevicePath \"\"" Jan 14 13:23:30.894414 kubelet[3424]: I0114 13:23:30.894103 3424 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6-bpf-maps\") on node \"ci-4152.2.0-a-ae9609fe4e\" DevicePath \"\"" Jan 14 13:23:30.897671 kubelet[3424]: E0114 13:23:30.897600 3424 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 14 13:23:31.237659 kubelet[3424]: I0114 13:23:31.237196 3424 scope.go:117] "RemoveContainer" containerID="9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce" Jan 14 13:23:31.244837 containerd[1713]: time="2025-01-14T13:23:31.239837846Z" level=info msg="RemoveContainer for \"9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce\"" Jan 14 13:23:31.245388 systemd[1]: Removed slice kubepods-besteffort-podd53f8a3f_b093_4949_80d5_de3a37614046.slice - libcontainer container kubepods-besteffort-podd53f8a3f_b093_4949_80d5_de3a37614046.slice. 
Jan 14 13:23:31.253826 systemd[1]: Removed slice kubepods-burstable-pod6bc4c6e2_6ac4_448f_a610_1d7a138ae1d6.slice - libcontainer container kubepods-burstable-pod6bc4c6e2_6ac4_448f_a610_1d7a138ae1d6.slice. Jan 14 13:23:31.254190 systemd[1]: kubepods-burstable-pod6bc4c6e2_6ac4_448f_a610_1d7a138ae1d6.slice: Consumed 7.326s CPU time. Jan 14 13:23:31.256190 containerd[1713]: time="2025-01-14T13:23:31.256112054Z" level=info msg="RemoveContainer for \"9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce\" returns successfully" Jan 14 13:23:31.256536 kubelet[3424]: I0114 13:23:31.256497 3424 scope.go:117] "RemoveContainer" containerID="9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce" Jan 14 13:23:31.256802 containerd[1713]: time="2025-01-14T13:23:31.256761366Z" level=error msg="ContainerStatus for \"9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce\": not found" Jan 14 13:23:31.257031 kubelet[3424]: E0114 13:23:31.256978 3424 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce\": not found" containerID="9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce" Jan 14 13:23:31.257146 kubelet[3424]: I0114 13:23:31.257124 3424 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce"} err="failed to get container status \"9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"9557733207b3911fe8c7ac24ff09f5057ea95ae08231cc56f8d1859b626e91ce\": not found" Jan 14 13:23:31.257199 kubelet[3424]: I0114 
13:23:31.257159 3424 scope.go:117] "RemoveContainer" containerID="629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325" Jan 14 13:23:31.258307 containerd[1713]: time="2025-01-14T13:23:31.258280695Z" level=info msg="RemoveContainer for \"629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325\"" Jan 14 13:23:31.265363 containerd[1713]: time="2025-01-14T13:23:31.265325728Z" level=info msg="RemoveContainer for \"629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325\" returns successfully" Jan 14 13:23:31.265522 kubelet[3424]: I0114 13:23:31.265496 3424 scope.go:117] "RemoveContainer" containerID="d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c" Jan 14 13:23:31.266838 containerd[1713]: time="2025-01-14T13:23:31.266650953Z" level=info msg="RemoveContainer for \"d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c\"" Jan 14 13:23:31.274356 containerd[1713]: time="2025-01-14T13:23:31.274322897Z" level=info msg="RemoveContainer for \"d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c\" returns successfully" Jan 14 13:23:31.274551 kubelet[3424]: I0114 13:23:31.274531 3424 scope.go:117] "RemoveContainer" containerID="be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c" Jan 14 13:23:31.275992 containerd[1713]: time="2025-01-14T13:23:31.275968028Z" level=info msg="RemoveContainer for \"be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c\"" Jan 14 13:23:31.284253 containerd[1713]: time="2025-01-14T13:23:31.284217284Z" level=info msg="RemoveContainer for \"be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c\" returns successfully" Jan 14 13:23:31.284743 kubelet[3424]: I0114 13:23:31.284685 3424 scope.go:117] "RemoveContainer" containerID="e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d" Jan 14 13:23:31.287157 containerd[1713]: time="2025-01-14T13:23:31.287123039Z" level=info msg="RemoveContainer for 
\"e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d\"" Jan 14 13:23:31.295533 containerd[1713]: time="2025-01-14T13:23:31.295505297Z" level=info msg="RemoveContainer for \"e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d\" returns successfully" Jan 14 13:23:31.295711 kubelet[3424]: I0114 13:23:31.295692 3424 scope.go:117] "RemoveContainer" containerID="f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2" Jan 14 13:23:31.296830 containerd[1713]: time="2025-01-14T13:23:31.296785322Z" level=info msg="RemoveContainer for \"f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2\"" Jan 14 13:23:31.308661 containerd[1713]: time="2025-01-14T13:23:31.308631845Z" level=info msg="RemoveContainer for \"f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2\" returns successfully" Jan 14 13:23:31.308874 kubelet[3424]: I0114 13:23:31.308852 3424 scope.go:117] "RemoveContainer" containerID="629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325" Jan 14 13:23:31.309103 containerd[1713]: time="2025-01-14T13:23:31.309063553Z" level=error msg="ContainerStatus for \"629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325\": not found" Jan 14 13:23:31.309222 kubelet[3424]: E0114 13:23:31.309202 3424 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325\": not found" containerID="629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325" Jan 14 13:23:31.309311 kubelet[3424]: I0114 13:23:31.309245 3424 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325"} 
err="failed to get container status \"629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325\": rpc error: code = NotFound desc = an error occurred when try to find container \"629be97bd419a6bd9574e895bf898832478dff0bf51ae5312d018a92d6ec5325\": not found" Jan 14 13:23:31.309311 kubelet[3424]: I0114 13:23:31.309259 3424 scope.go:117] "RemoveContainer" containerID="d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c" Jan 14 13:23:31.309479 containerd[1713]: time="2025-01-14T13:23:31.309426460Z" level=error msg="ContainerStatus for \"d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c\": not found" Jan 14 13:23:31.309642 kubelet[3424]: E0114 13:23:31.309601 3424 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c\": not found" containerID="d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c" Jan 14 13:23:31.309710 kubelet[3424]: I0114 13:23:31.309660 3424 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c"} err="failed to get container status \"d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9ef2623d011e774692b467b91fb56e92d0bf16bee2adf9f105358175fcf031c\": not found" Jan 14 13:23:31.309710 kubelet[3424]: I0114 13:23:31.309674 3424 scope.go:117] "RemoveContainer" containerID="be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c" Jan 14 13:23:31.309951 containerd[1713]: time="2025-01-14T13:23:31.309923370Z" level=error msg="ContainerStatus for 
\"be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c\": not found" Jan 14 13:23:31.310082 kubelet[3424]: E0114 13:23:31.310043 3424 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c\": not found" containerID="be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c" Jan 14 13:23:31.310082 kubelet[3424]: I0114 13:23:31.310074 3424 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c"} err="failed to get container status \"be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c\": rpc error: code = NotFound desc = an error occurred when try to find container \"be7fb81b4cd96f1c85105c300c6990f6998eb236a8215af288074cdca401c88c\": not found" Jan 14 13:23:31.310214 kubelet[3424]: I0114 13:23:31.310086 3424 scope.go:117] "RemoveContainer" containerID="e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d" Jan 14 13:23:31.310271 containerd[1713]: time="2025-01-14T13:23:31.310245476Z" level=error msg="ContainerStatus for \"e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d\": not found" Jan 14 13:23:31.310424 kubelet[3424]: E0114 13:23:31.310368 3424 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d\": not found" 
containerID="e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d" Jan 14 13:23:31.310424 kubelet[3424]: I0114 13:23:31.310407 3424 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d"} err="failed to get container status \"e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d\": rpc error: code = NotFound desc = an error occurred when try to find container \"e1dfd1a566485fe93bdffb08732dcefa856eff5c62e9c044cad1f53de2d8393d\": not found" Jan 14 13:23:31.310424 kubelet[3424]: I0114 13:23:31.310422 3424 scope.go:117] "RemoveContainer" containerID="f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2" Jan 14 13:23:31.310652 containerd[1713]: time="2025-01-14T13:23:31.310587982Z" level=error msg="ContainerStatus for \"f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2\": not found" Jan 14 13:23:31.310821 kubelet[3424]: E0114 13:23:31.310732 3424 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2\": not found" containerID="f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2" Jan 14 13:23:31.310821 kubelet[3424]: I0114 13:23:31.310756 3424 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2"} err="failed to get container status \"f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6af6eeb33871d779fd1f7bc9684e8586c98b539b7279dac378f2be74dac0ad2\": not found" Jan 14 
13:23:31.485579 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19e7ec188f6f2ccc96235379a194cf7e27bf0acdfa395f719814a342c7c7b30c-rootfs.mount: Deactivated successfully. Jan 14 13:23:31.485736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f16e1882834e4f35bd0f9a5b338205143dd67bfd77f6caec57a8dccbfc2015a-rootfs.mount: Deactivated successfully. Jan 14 13:23:31.485822 systemd[1]: var-lib-kubelet-pods-6bc4c6e2\x2d6ac4\x2d448f\x2da610\x2d1d7a138ae1d6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsq7k6.mount: Deactivated successfully. Jan 14 13:23:31.485905 systemd[1]: var-lib-kubelet-pods-d53f8a3f\x2db093\x2d4949\x2d80d5\x2dde3a37614046-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtshgx.mount: Deactivated successfully. Jan 14 13:23:31.485989 systemd[1]: var-lib-kubelet-pods-6bc4c6e2\x2d6ac4\x2d448f\x2da610\x2d1d7a138ae1d6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 14 13:23:31.486070 systemd[1]: var-lib-kubelet-pods-6bc4c6e2\x2d6ac4\x2d448f\x2da610\x2d1d7a138ae1d6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 14 13:23:32.524184 sshd[5024]: Connection closed by 10.200.16.10 port 46466 Jan 14 13:23:32.525267 sshd-session[5022]: pam_unix(sshd:session): session closed for user core Jan 14 13:23:32.529706 systemd-logind[1684]: Session 26 logged out. Waiting for processes to exit. Jan 14 13:23:32.530629 systemd[1]: sshd@23-10.200.4.47:22-10.200.16.10:46466.service: Deactivated successfully. Jan 14 13:23:32.532981 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 13:23:32.534284 systemd-logind[1684]: Removed session 26. Jan 14 13:23:32.636918 systemd[1]: Started sshd@24-10.200.4.47:22-10.200.16.10:46470.service - OpenSSH per-connection server daemon (10.200.16.10:46470). 
Jan 14 13:23:32.788196 kubelet[3424]: I0114 13:23:32.788045 3424 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" path="/var/lib/kubelet/pods/6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6/volumes" Jan 14 13:23:32.789382 kubelet[3424]: I0114 13:23:32.789342 3424 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d53f8a3f-b093-4949-80d5-de3a37614046" path="/var/lib/kubelet/pods/d53f8a3f-b093-4949-80d5-de3a37614046/volumes" Jan 14 13:23:33.248813 sshd[5185]: Accepted publickey for core from 10.200.16.10 port 46470 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:23:33.251427 sshd-session[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:23:33.257467 systemd-logind[1684]: New session 27 of user core. Jan 14 13:23:33.264099 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 14 13:23:34.034712 kubelet[3424]: I0114 13:23:34.034667 3424 topology_manager.go:215] "Topology Admit Handler" podUID="76b618bf-1e8c-4156-883f-ba8bfb44d720" podNamespace="kube-system" podName="cilium-wszlh" Jan 14 13:23:34.035213 kubelet[3424]: E0114 13:23:34.034746 3424 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d53f8a3f-b093-4949-80d5-de3a37614046" containerName="cilium-operator" Jan 14 13:23:34.035213 kubelet[3424]: E0114 13:23:34.034760 3424 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" containerName="mount-cgroup" Jan 14 13:23:34.035213 kubelet[3424]: E0114 13:23:34.034772 3424 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" containerName="apply-sysctl-overwrites" Jan 14 13:23:34.035213 kubelet[3424]: E0114 13:23:34.034781 3424 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" containerName="clean-cilium-state" Jan 14 13:23:34.035213 kubelet[3424]: E0114 
13:23:34.034791 3424 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" containerName="mount-bpf-fs" Jan 14 13:23:34.035213 kubelet[3424]: E0114 13:23:34.034800 3424 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" containerName="cilium-agent" Jan 14 13:23:34.035213 kubelet[3424]: I0114 13:23:34.034830 3424 memory_manager.go:354] "RemoveStaleState removing state" podUID="d53f8a3f-b093-4949-80d5-de3a37614046" containerName="cilium-operator" Jan 14 13:23:34.035213 kubelet[3424]: I0114 13:23:34.034840 3424 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bc4c6e2-6ac4-448f-a610-1d7a138ae1d6" containerName="cilium-agent" Jan 14 13:23:34.051277 systemd[1]: Created slice kubepods-burstable-pod76b618bf_1e8c_4156_883f_ba8bfb44d720.slice - libcontainer container kubepods-burstable-pod76b618bf_1e8c_4156_883f_ba8bfb44d720.slice. Jan 14 13:23:34.109210 kubelet[3424]: I0114 13:23:34.109108 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/76b618bf-1e8c-4156-883f-ba8bfb44d720-cilium-run\") pod \"cilium-wszlh\" (UID: \"76b618bf-1e8c-4156-883f-ba8bfb44d720\") " pod="kube-system/cilium-wszlh" Jan 14 13:23:34.109411 kubelet[3424]: I0114 13:23:34.109243 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/76b618bf-1e8c-4156-883f-ba8bfb44d720-host-proc-sys-net\") pod \"cilium-wszlh\" (UID: \"76b618bf-1e8c-4156-883f-ba8bfb44d720\") " pod="kube-system/cilium-wszlh" Jan 14 13:23:34.109411 kubelet[3424]: I0114 13:23:34.109305 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/76b618bf-1e8c-4156-883f-ba8bfb44d720-host-proc-sys-kernel\") pod 
\"cilium-wszlh\" (UID: \"76b618bf-1e8c-4156-883f-ba8bfb44d720\") " pod="kube-system/cilium-wszlh" Jan 14 13:23:34.109411 kubelet[3424]: I0114 13:23:34.109340 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76b618bf-1e8c-4156-883f-ba8bfb44d720-etc-cni-netd\") pod \"cilium-wszlh\" (UID: \"76b618bf-1e8c-4156-883f-ba8bfb44d720\") " pod="kube-system/cilium-wszlh" Jan 14 13:23:34.109411 kubelet[3424]: I0114 13:23:34.109390 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/76b618bf-1e8c-4156-883f-ba8bfb44d720-bpf-maps\") pod \"cilium-wszlh\" (UID: \"76b618bf-1e8c-4156-883f-ba8bfb44d720\") " pod="kube-system/cilium-wszlh" Jan 14 13:23:34.109636 kubelet[3424]: I0114 13:23:34.109418 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/76b618bf-1e8c-4156-883f-ba8bfb44d720-cni-path\") pod \"cilium-wszlh\" (UID: \"76b618bf-1e8c-4156-883f-ba8bfb44d720\") " pod="kube-system/cilium-wszlh" Jan 14 13:23:34.109636 kubelet[3424]: I0114 13:23:34.109490 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/76b618bf-1e8c-4156-883f-ba8bfb44d720-cilium-cgroup\") pod \"cilium-wszlh\" (UID: \"76b618bf-1e8c-4156-883f-ba8bfb44d720\") " pod="kube-system/cilium-wszlh" Jan 14 13:23:34.109636 kubelet[3424]: I0114 13:23:34.109557 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/76b618bf-1e8c-4156-883f-ba8bfb44d720-hubble-tls\") pod \"cilium-wszlh\" (UID: \"76b618bf-1e8c-4156-883f-ba8bfb44d720\") " pod="kube-system/cilium-wszlh" Jan 14 13:23:34.109636 kubelet[3424]: I0114 13:23:34.109589 3424 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/76b618bf-1e8c-4156-883f-ba8bfb44d720-hostproc\") pod \"cilium-wszlh\" (UID: \"76b618bf-1e8c-4156-883f-ba8bfb44d720\") " pod="kube-system/cilium-wszlh" Jan 14 13:23:34.109814 kubelet[3424]: I0114 13:23:34.109654 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76b618bf-1e8c-4156-883f-ba8bfb44d720-xtables-lock\") pod \"cilium-wszlh\" (UID: \"76b618bf-1e8c-4156-883f-ba8bfb44d720\") " pod="kube-system/cilium-wszlh" Jan 14 13:23:34.109814 kubelet[3424]: I0114 13:23:34.109731 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/76b618bf-1e8c-4156-883f-ba8bfb44d720-cilium-ipsec-secrets\") pod \"cilium-wszlh\" (UID: \"76b618bf-1e8c-4156-883f-ba8bfb44d720\") " pod="kube-system/cilium-wszlh" Jan 14 13:23:34.110249 kubelet[3424]: I0114 13:23:34.110180 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t9w8\" (UniqueName: \"kubernetes.io/projected/76b618bf-1e8c-4156-883f-ba8bfb44d720-kube-api-access-2t9w8\") pod \"cilium-wszlh\" (UID: \"76b618bf-1e8c-4156-883f-ba8bfb44d720\") " pod="kube-system/cilium-wszlh" Jan 14 13:23:34.110362 kubelet[3424]: I0114 13:23:34.110341 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76b618bf-1e8c-4156-883f-ba8bfb44d720-lib-modules\") pod \"cilium-wszlh\" (UID: \"76b618bf-1e8c-4156-883f-ba8bfb44d720\") " pod="kube-system/cilium-wszlh" Jan 14 13:23:34.110415 kubelet[3424]: I0114 13:23:34.110379 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/76b618bf-1e8c-4156-883f-ba8bfb44d720-clustermesh-secrets\") pod \"cilium-wszlh\" (UID: \"76b618bf-1e8c-4156-883f-ba8bfb44d720\") " pod="kube-system/cilium-wszlh"
Jan 14 13:23:34.110560 kubelet[3424]: I0114 13:23:34.110476 3424 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76b618bf-1e8c-4156-883f-ba8bfb44d720-cilium-config-path\") pod \"cilium-wszlh\" (UID: \"76b618bf-1e8c-4156-883f-ba8bfb44d720\") " pod="kube-system/cilium-wszlh"
Jan 14 13:23:34.117771 sshd[5187]: Connection closed by 10.200.16.10 port 46470
Jan 14 13:23:34.118981 sshd-session[5185]: pam_unix(sshd:session): session closed for user core
Jan 14 13:23:34.129071 systemd[1]: sshd@24-10.200.4.47:22-10.200.16.10:46470.service: Deactivated successfully.
Jan 14 13:23:34.132938 systemd[1]: session-27.scope: Deactivated successfully.
Jan 14 13:23:34.138111 systemd-logind[1684]: Session 27 logged out. Waiting for processes to exit.
Jan 14 13:23:34.141150 systemd-logind[1684]: Removed session 27.
Jan 14 13:23:34.250901 systemd[1]: Started sshd@25-10.200.4.47:22-10.200.16.10:46474.service - OpenSSH per-connection server daemon (10.200.16.10:46474).
Jan 14 13:23:34.356163 containerd[1713]: time="2025-01-14T13:23:34.355957452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wszlh,Uid:76b618bf-1e8c-4156-883f-ba8bfb44d720,Namespace:kube-system,Attempt:0,}"
Jan 14 13:23:34.409880 containerd[1713]: time="2025-01-14T13:23:34.409549446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 13:23:34.409880 containerd[1713]: time="2025-01-14T13:23:34.409649247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 13:23:34.409880 containerd[1713]: time="2025-01-14T13:23:34.409674847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:23:34.409880 containerd[1713]: time="2025-01-14T13:23:34.409786348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:23:34.438797 systemd[1]: Started cri-containerd-26c7a4800431e2eb7825c4b8e56c42b7f6c15718079ac9845cff9e6a29ebebfa.scope - libcontainer container 26c7a4800431e2eb7825c4b8e56c42b7f6c15718079ac9845cff9e6a29ebebfa.
Jan 14 13:23:34.465514 containerd[1713]: time="2025-01-14T13:23:34.465438161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wszlh,Uid:76b618bf-1e8c-4156-883f-ba8bfb44d720,Namespace:kube-system,Attempt:0,} returns sandbox id \"26c7a4800431e2eb7825c4b8e56c42b7f6c15718079ac9845cff9e6a29ebebfa\""
Jan 14 13:23:34.471688 containerd[1713]: time="2025-01-14T13:23:34.471135414Z" level=info msg="CreateContainer within sandbox \"26c7a4800431e2eb7825c4b8e56c42b7f6c15718079ac9845cff9e6a29ebebfa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 14 13:23:34.509421 containerd[1713]: time="2025-01-14T13:23:34.509369867Z" level=info msg="CreateContainer within sandbox \"26c7a4800431e2eb7825c4b8e56c42b7f6c15718079ac9845cff9e6a29ebebfa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6868d376477f3cd25d25888acf8dd7d0e74553d43155c215ef71d6c2e8f77ef9\""
Jan 14 13:23:34.510013 containerd[1713]: time="2025-01-14T13:23:34.509919072Z" level=info msg="StartContainer for \"6868d376477f3cd25d25888acf8dd7d0e74553d43155c215ef71d6c2e8f77ef9\""
Jan 14 13:23:34.535788 systemd[1]: Started cri-containerd-6868d376477f3cd25d25888acf8dd7d0e74553d43155c215ef71d6c2e8f77ef9.scope - libcontainer container 6868d376477f3cd25d25888acf8dd7d0e74553d43155c215ef71d6c2e8f77ef9.
Jan 14 13:23:34.565916 containerd[1713]: time="2025-01-14T13:23:34.565864588Z" level=info msg="StartContainer for \"6868d376477f3cd25d25888acf8dd7d0e74553d43155c215ef71d6c2e8f77ef9\" returns successfully"
Jan 14 13:23:34.572878 systemd[1]: cri-containerd-6868d376477f3cd25d25888acf8dd7d0e74553d43155c215ef71d6c2e8f77ef9.scope: Deactivated successfully.
Jan 14 13:23:34.629638 containerd[1713]: time="2025-01-14T13:23:34.629303773Z" level=info msg="shim disconnected" id=6868d376477f3cd25d25888acf8dd7d0e74553d43155c215ef71d6c2e8f77ef9 namespace=k8s.io
Jan 14 13:23:34.629638 containerd[1713]: time="2025-01-14T13:23:34.629377674Z" level=warning msg="cleaning up after shim disconnected" id=6868d376477f3cd25d25888acf8dd7d0e74553d43155c215ef71d6c2e8f77ef9 namespace=k8s.io
Jan 14 13:23:34.629638 containerd[1713]: time="2025-01-14T13:23:34.629388874Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:23:34.849134 kubelet[3424]: I0114 13:23:34.849078 3424 setters.go:568] "Node became not ready" node="ci-4152.2.0-a-ae9609fe4e" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-14T13:23:34Z","lastTransitionTime":"2025-01-14T13:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 14 13:23:34.873399 sshd[5201]: Accepted publickey for core from 10.200.16.10 port 46474 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:23:34.874914 sshd-session[5201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:23:34.882725 systemd-logind[1684]: New session 28 of user core.
Jan 14 13:23:34.886793 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 14 13:23:35.260478 containerd[1713]: time="2025-01-14T13:23:35.260341694Z" level=info msg="CreateContainer within sandbox \"26c7a4800431e2eb7825c4b8e56c42b7f6c15718079ac9845cff9e6a29ebebfa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 14 13:23:35.295554 containerd[1713]: time="2025-01-14T13:23:35.295503018Z" level=info msg="CreateContainer within sandbox \"26c7a4800431e2eb7825c4b8e56c42b7f6c15718079ac9845cff9e6a29ebebfa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b17500d353fb595f6c6b18fb30c241203a8f7b5de899dc9f7a64b228f8579c1e\""
Jan 14 13:23:35.296271 containerd[1713]: time="2025-01-14T13:23:35.296233425Z" level=info msg="StartContainer for \"b17500d353fb595f6c6b18fb30c241203a8f7b5de899dc9f7a64b228f8579c1e\""
Jan 14 13:23:35.302957 sshd[5310]: Connection closed by 10.200.16.10 port 46474
Jan 14 13:23:35.305694 sshd-session[5201]: pam_unix(sshd:session): session closed for user core
Jan 14 13:23:35.310896 systemd-logind[1684]: Session 28 logged out. Waiting for processes to exit.
Jan 14 13:23:35.314657 systemd[1]: sshd@25-10.200.4.47:22-10.200.16.10:46474.service: Deactivated successfully.
Jan 14 13:23:35.318241 systemd[1]: session-28.scope: Deactivated successfully.
Jan 14 13:23:35.324504 systemd-logind[1684]: Removed session 28.
Jan 14 13:23:35.340779 systemd[1]: Started cri-containerd-b17500d353fb595f6c6b18fb30c241203a8f7b5de899dc9f7a64b228f8579c1e.scope - libcontainer container b17500d353fb595f6c6b18fb30c241203a8f7b5de899dc9f7a64b228f8579c1e.
Jan 14 13:23:35.373045 containerd[1713]: time="2025-01-14T13:23:35.372804531Z" level=info msg="StartContainer for \"b17500d353fb595f6c6b18fb30c241203a8f7b5de899dc9f7a64b228f8579c1e\" returns successfully"
Jan 14 13:23:35.378097 systemd[1]: cri-containerd-b17500d353fb595f6c6b18fb30c241203a8f7b5de899dc9f7a64b228f8579c1e.scope: Deactivated successfully.
Jan 14 13:23:35.412992 systemd[1]: Started sshd@26-10.200.4.47:22-10.200.16.10:46490.service - OpenSSH per-connection server daemon (10.200.16.10:46490).
Jan 14 13:23:35.422012 containerd[1713]: time="2025-01-14T13:23:35.421758083Z" level=info msg="shim disconnected" id=b17500d353fb595f6c6b18fb30c241203a8f7b5de899dc9f7a64b228f8579c1e namespace=k8s.io
Jan 14 13:23:35.422012 containerd[1713]: time="2025-01-14T13:23:35.421843883Z" level=warning msg="cleaning up after shim disconnected" id=b17500d353fb595f6c6b18fb30c241203a8f7b5de899dc9f7a64b228f8579c1e namespace=k8s.io
Jan 14 13:23:35.422012 containerd[1713]: time="2025-01-14T13:23:35.421855683Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:23:35.785384 kubelet[3424]: E0114 13:23:35.785326 3424 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-9928g" podUID="0f7f1359-317a-4220-b773-bc368a4e321f"
Jan 14 13:23:35.899485 kubelet[3424]: E0114 13:23:35.899436 3424 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 14 13:23:36.032063 sshd[5364]: Accepted publickey for core from 10.200.16.10 port 46490 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:23:36.033650 sshd-session[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:23:36.038170 systemd-logind[1684]: New session 29 of user core.
Jan 14 13:23:36.042790 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 14 13:23:36.228755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b17500d353fb595f6c6b18fb30c241203a8f7b5de899dc9f7a64b228f8579c1e-rootfs.mount: Deactivated successfully.
Jan 14 13:23:36.264765 containerd[1713]: time="2025-01-14T13:23:36.264706294Z" level=info msg="CreateContainer within sandbox \"26c7a4800431e2eb7825c4b8e56c42b7f6c15718079ac9845cff9e6a29ebebfa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 14 13:23:36.302655 containerd[1713]: time="2025-01-14T13:23:36.300889371Z" level=info msg="CreateContainer within sandbox \"26c7a4800431e2eb7825c4b8e56c42b7f6c15718079ac9845cff9e6a29ebebfa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8ffbff911227d8dbeff2775732e51e7a465d32314e71623b7bf5f715fc6d3c9c\""
Jan 14 13:23:36.302655 containerd[1713]: time="2025-01-14T13:23:36.301762487Z" level=info msg="StartContainer for \"8ffbff911227d8dbeff2775732e51e7a465d32314e71623b7bf5f715fc6d3c9c\""
Jan 14 13:23:36.339768 systemd[1]: Started cri-containerd-8ffbff911227d8dbeff2775732e51e7a465d32314e71623b7bf5f715fc6d3c9c.scope - libcontainer container 8ffbff911227d8dbeff2775732e51e7a465d32314e71623b7bf5f715fc6d3c9c.
Jan 14 13:23:36.406311 containerd[1713]: time="2025-01-14T13:23:36.406233842Z" level=info msg="StartContainer for \"8ffbff911227d8dbeff2775732e51e7a465d32314e71623b7bf5f715fc6d3c9c\" returns successfully"
Jan 14 13:23:36.409410 systemd[1]: cri-containerd-8ffbff911227d8dbeff2775732e51e7a465d32314e71623b7bf5f715fc6d3c9c.scope: Deactivated successfully.
Jan 14 13:23:36.469104 containerd[1713]: time="2025-01-14T13:23:36.468522307Z" level=info msg="shim disconnected" id=8ffbff911227d8dbeff2775732e51e7a465d32314e71623b7bf5f715fc6d3c9c namespace=k8s.io
Jan 14 13:23:36.469405 containerd[1713]: time="2025-01-14T13:23:36.469383123Z" level=warning msg="cleaning up after shim disconnected" id=8ffbff911227d8dbeff2775732e51e7a465d32314e71623b7bf5f715fc6d3c9c namespace=k8s.io
Jan 14 13:23:36.469493 containerd[1713]: time="2025-01-14T13:23:36.469477425Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:23:37.228763 systemd[1]: run-containerd-runc-k8s.io-8ffbff911227d8dbeff2775732e51e7a465d32314e71623b7bf5f715fc6d3c9c-runc.NdiiJJ.mount: Deactivated successfully.
Jan 14 13:23:37.228894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ffbff911227d8dbeff2775732e51e7a465d32314e71623b7bf5f715fc6d3c9c-rootfs.mount: Deactivated successfully.
Jan 14 13:23:37.269597 containerd[1713]: time="2025-01-14T13:23:37.269363390Z" level=info msg="CreateContainer within sandbox \"26c7a4800431e2eb7825c4b8e56c42b7f6c15718079ac9845cff9e6a29ebebfa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 14 13:23:37.310035 containerd[1713]: time="2025-01-14T13:23:37.309985450Z" level=info msg="CreateContainer within sandbox \"26c7a4800431e2eb7825c4b8e56c42b7f6c15718079ac9845cff9e6a29ebebfa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"73ab96be29b7ebf629bacd213b55590969308febe7ab95b8280ade0a36a689d4\""
Jan 14 13:23:37.310530 containerd[1713]: time="2025-01-14T13:23:37.310497759Z" level=info msg="StartContainer for \"73ab96be29b7ebf629bacd213b55590969308febe7ab95b8280ade0a36a689d4\""
Jan 14 13:23:37.348755 systemd[1]: Started cri-containerd-73ab96be29b7ebf629bacd213b55590969308febe7ab95b8280ade0a36a689d4.scope - libcontainer container 73ab96be29b7ebf629bacd213b55590969308febe7ab95b8280ade0a36a689d4.
Jan 14 13:23:37.375589 systemd[1]: cri-containerd-73ab96be29b7ebf629bacd213b55590969308febe7ab95b8280ade0a36a689d4.scope: Deactivated successfully.
Jan 14 13:23:37.380517 containerd[1713]: time="2025-01-14T13:23:37.380386367Z" level=info msg="StartContainer for \"73ab96be29b7ebf629bacd213b55590969308febe7ab95b8280ade0a36a689d4\" returns successfully"
Jan 14 13:23:37.415348 containerd[1713]: time="2025-01-14T13:23:37.415270319Z" level=info msg="shim disconnected" id=73ab96be29b7ebf629bacd213b55590969308febe7ab95b8280ade0a36a689d4 namespace=k8s.io
Jan 14 13:23:37.415348 containerd[1713]: time="2025-01-14T13:23:37.415342621Z" level=warning msg="cleaning up after shim disconnected" id=73ab96be29b7ebf629bacd213b55590969308febe7ab95b8280ade0a36a689d4 namespace=k8s.io
Jan 14 13:23:37.415348 containerd[1713]: time="2025-01-14T13:23:37.415354321Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:23:37.785501 kubelet[3424]: E0114 13:23:37.785446 3424 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-9928g" podUID="0f7f1359-317a-4220-b773-bc368a4e321f"
Jan 14 13:23:38.228760 systemd[1]: run-containerd-runc-k8s.io-73ab96be29b7ebf629bacd213b55590969308febe7ab95b8280ade0a36a689d4-runc.xD9a8V.mount: Deactivated successfully.
Jan 14 13:23:38.228880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73ab96be29b7ebf629bacd213b55590969308febe7ab95b8280ade0a36a689d4-rootfs.mount: Deactivated successfully.
Jan 14 13:23:38.277599 containerd[1713]: time="2025-01-14T13:23:38.277405949Z" level=info msg="CreateContainer within sandbox \"26c7a4800431e2eb7825c4b8e56c42b7f6c15718079ac9845cff9e6a29ebebfa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 14 13:23:38.316529 containerd[1713]: time="2025-01-14T13:23:38.316480080Z" level=info msg="CreateContainer within sandbox \"26c7a4800431e2eb7825c4b8e56c42b7f6c15718079ac9845cff9e6a29ebebfa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c1d3f1e7b85d1385bd67a28155b288cfbb5276451e11b93fbf8a150cd0eaac63\""
Jan 14 13:23:38.318224 containerd[1713]: time="2025-01-14T13:23:38.317186793Z" level=info msg="StartContainer for \"c1d3f1e7b85d1385bd67a28155b288cfbb5276451e11b93fbf8a150cd0eaac63\""
Jan 14 13:23:38.356893 systemd[1]: Started cri-containerd-c1d3f1e7b85d1385bd67a28155b288cfbb5276451e11b93fbf8a150cd0eaac63.scope - libcontainer container c1d3f1e7b85d1385bd67a28155b288cfbb5276451e11b93fbf8a150cd0eaac63.
Jan 14 13:23:38.393056 containerd[1713]: time="2025-01-14T13:23:38.392752707Z" level=info msg="StartContainer for \"c1d3f1e7b85d1385bd67a28155b288cfbb5276451e11b93fbf8a150cd0eaac63\" returns successfully"
Jan 14 13:23:38.901640 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 14 13:23:39.784926 kubelet[3424]: E0114 13:23:39.784871 3424 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-9928g" podUID="0f7f1359-317a-4220-b773-bc368a4e321f"
Jan 14 13:23:40.626543 systemd[1]: run-containerd-runc-k8s.io-c1d3f1e7b85d1385bd67a28155b288cfbb5276451e11b93fbf8a150cd0eaac63-runc.QUPvLv.mount: Deactivated successfully.
Jan 14 13:23:40.703713 kubelet[3424]: E0114 13:23:40.703532 3424 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:54552->127.0.0.1:38511: write tcp 127.0.0.1:54552->127.0.0.1:38511: write: broken pipe
Jan 14 13:23:41.822386 systemd-networkd[1438]: lxc_health: Link UP
Jan 14 13:23:41.828741 systemd-networkd[1438]: lxc_health: Gained carrier
Jan 14 13:23:42.392969 kubelet[3424]: I0114 13:23:42.392646 3424 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-wszlh" podStartSLOduration=8.392562602 podStartE2EDuration="8.392562602s" podCreationTimestamp="2025-01-14 13:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:23:39.295446095 +0000 UTC m=+198.629333647" watchObservedRunningTime="2025-01-14 13:23:42.392562602 +0000 UTC m=+201.726450254"
Jan 14 13:23:42.861187 systemd[1]: run-containerd-runc-k8s.io-c1d3f1e7b85d1385bd67a28155b288cfbb5276451e11b93fbf8a150cd0eaac63-runc.Xc8c0v.mount: Deactivated successfully.
Jan 14 13:23:43.084780 systemd-networkd[1438]: lxc_health: Gained IPv6LL
Jan 14 13:23:47.437045 sshd[5378]: Connection closed by 10.200.16.10 port 46490
Jan 14 13:23:47.438122 sshd-session[5364]: pam_unix(sshd:session): session closed for user core
Jan 14 13:23:47.441331 systemd[1]: sshd@26-10.200.4.47:22-10.200.16.10:46490.service: Deactivated successfully.
Jan 14 13:23:47.443449 systemd[1]: session-29.scope: Deactivated successfully.
Jan 14 13:23:47.445371 systemd-logind[1684]: Session 29 logged out. Waiting for processes to exit.
Jan 14 13:23:47.446628 systemd-logind[1684]: Removed session 29.