Jan 14 13:17:47.109255 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 14 13:17:47.109283 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 14 13:17:47.109294 kernel: BIOS-provided physical RAM map:
Jan 14 13:17:47.109302 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 13:17:47.109309 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 14 13:17:47.109315 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 14 13:17:47.109323 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Jan 14 13:17:47.109334 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 14 13:17:47.109340 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 14 13:17:47.109346 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 14 13:17:47.109356 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 14 13:17:47.109363 kernel: printk: bootconsole [earlyser0] enabled
Jan 14 13:17:47.109369 kernel: NX (Execute Disable) protection: active
Jan 14 13:17:47.109378 kernel: APIC: Static calls initialized
Jan 14 13:17:47.109389 kernel: efi: EFI v2.7 by Microsoft
Jan 14 13:17:47.109397 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018
Jan 14 13:17:47.109407 kernel: random: crng init done
Jan 14 13:17:47.109414 kernel: secureboot: Secure boot disabled
Jan 14 13:17:47.109420 kernel: SMBIOS 3.1.0 present.
Jan 14 13:17:47.109431 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 14 13:17:47.109438 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 14 13:17:47.109445 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 14 13:17:47.109452 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 14 13:17:47.109462 kernel: Hyper-V: Nested features: 0x1e0101
Jan 14 13:17:47.109471 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 14 13:17:47.109479 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 14 13:17:47.109488 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:17:47.109495 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:17:47.109504 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 14 13:17:47.109513 kernel: tsc: Detected 2593.906 MHz processor
Jan 14 13:17:47.109520 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 13:17:47.109529 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 13:17:47.109538 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 14 13:17:47.109548 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 14 13:17:47.109559 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 13:17:47.109566 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 14 13:17:47.109573 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 14 13:17:47.109583 kernel: Using GB pages for direct mapping
Jan 14 13:17:47.109591 kernel: ACPI: Early table checksum verification disabled
Jan 14 13:17:47.109601 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 14 13:17:47.109615 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:17:47.109630 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:17:47.109645 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 14 13:17:47.109659 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 14 13:17:47.109674 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:17:47.109690 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:17:47.109707 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:17:47.109732 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:17:47.109748 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:17:47.109767 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:17:47.109781 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:17:47.109796 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 14 13:17:47.109820 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 14 13:17:47.109834 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 14 13:17:47.109849 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 14 13:17:47.109866 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 14 13:17:47.109888 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 14 13:17:47.109904 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 14 13:17:47.109922 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 14 13:17:47.109939 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 14 13:17:47.109953 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 14 13:17:47.109971 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 14 13:17:47.109986 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 14 13:17:47.110002 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 14 13:17:47.110026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 14 13:17:47.110043 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 14 13:17:47.110057 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 14 13:17:47.110073 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 14 13:17:47.110088 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 14 13:17:47.110105 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 14 13:17:47.110121 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 14 13:17:47.110136 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 14 13:17:47.110152 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 14 13:17:47.110173 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 14 13:17:47.110190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 14 13:17:47.110206 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 14 13:17:47.110221 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 14 13:17:47.110235 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 14 13:17:47.110253 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 14 13:17:47.110269 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 14 13:17:47.110285 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 14 13:17:47.110300 kernel: Zone ranges:
Jan 14 13:17:47.110322 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 13:17:47.110336 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 14 13:17:47.110348 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:17:47.110362 kernel: Movable zone start for each node
Jan 14 13:17:47.110377 kernel: Early memory node ranges
Jan 14 13:17:47.110392 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 14 13:17:47.110409 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 14 13:17:47.110426 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 14 13:17:47.110443 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:17:47.110464 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 14 13:17:47.110480 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 13:17:47.110496 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 14 13:17:47.110510 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 14 13:17:47.110525 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 14 13:17:47.110539 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 14 13:17:47.110554 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 14 13:17:47.110571 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 13:17:47.110587 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 13:17:47.110613 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 14 13:17:47.110630 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 14 13:17:47.110647 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 14 13:17:47.110663 kernel: Booting paravirtualized kernel on Hyper-V
Jan 14 13:17:47.110678 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 13:17:47.110694 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 14 13:17:47.110708 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 14 13:17:47.110723 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 14 13:17:47.110738 kernel: pcpu-alloc: [0] 0 1
Jan 14 13:17:47.110760 kernel: Hyper-V: PV spinlocks enabled
Jan 14 13:17:47.110773 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 13:17:47.110790 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 14 13:17:47.110814 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 14 13:17:47.110830 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 14 13:17:47.110845 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 13:17:47.110860 kernel: Fallback order for Node 0: 0
Jan 14 13:17:47.110878 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 14 13:17:47.110902 kernel: Policy zone: Normal
Jan 14 13:17:47.110933 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 13:17:47.110950 kernel: software IO TLB: area num 2.
Jan 14 13:17:47.110971 kernel: Memory: 8077088K/8387460K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 310116K reserved, 0K cma-reserved)
Jan 14 13:17:47.110987 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 14 13:17:47.111003 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 14 13:17:47.111021 kernel: ftrace: allocated 149 pages with 4 groups
Jan 14 13:17:47.111036 kernel: Dynamic Preempt: voluntary
Jan 14 13:17:47.111052 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 13:17:47.111071 kernel: rcu: RCU event tracing is enabled.
Jan 14 13:17:47.111084 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 14 13:17:47.111099 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 13:17:47.111111 kernel: Rude variant of Tasks RCU enabled.
Jan 14 13:17:47.111123 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 13:17:47.111135 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 13:17:47.111148 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 14 13:17:47.111165 kernel: Using NULL legacy PIC
Jan 14 13:17:47.111179 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 14 13:17:47.111193 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 13:17:47.111208 kernel: Console: colour dummy device 80x25
Jan 14 13:17:47.111223 kernel: printk: console [tty1] enabled
Jan 14 13:17:47.111238 kernel: printk: console [ttyS0] enabled
Jan 14 13:17:47.111255 kernel: printk: bootconsole [earlyser0] disabled
Jan 14 13:17:47.111268 kernel: ACPI: Core revision 20230628
Jan 14 13:17:47.111282 kernel: Failed to register legacy timer interrupt
Jan 14 13:17:47.111295 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 13:17:47.111313 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 14 13:17:47.111333 kernel: Hyper-V: Using IPI hypercalls
Jan 14 13:17:47.111346 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 14 13:17:47.111359 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 14 13:17:47.111372 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 14 13:17:47.111386 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 14 13:17:47.111401 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 14 13:17:47.111414 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 14 13:17:47.111429 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Jan 14 13:17:47.111446 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 14 13:17:47.111460 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 14 13:17:47.111475 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 13:17:47.111489 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 13:17:47.111503 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 14 13:17:47.111518 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 14 13:17:47.111533 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 14 13:17:47.111548 kernel: RETBleed: Vulnerable
Jan 14 13:17:47.111561 kernel: Speculative Store Bypass: Vulnerable
Jan 14 13:17:47.111574 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:17:47.111589 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:17:47.111602 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 14 13:17:47.111616 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 13:17:47.111631 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 13:17:47.111646 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 13:17:47.111661 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 14 13:17:47.111676 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 14 13:17:47.111690 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 14 13:17:47.111702 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 13:17:47.111715 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 14 13:17:47.111728 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 14 13:17:47.111745 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 14 13:17:47.111757 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 14 13:17:47.111772 kernel: Freeing SMP alternatives memory: 32K
Jan 14 13:17:47.111785 kernel: pid_max: default: 32768 minimum: 301
Jan 14 13:17:47.111798 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 14 13:17:47.111824 kernel: landlock: Up and running.
Jan 14 13:17:47.111838 kernel: SELinux: Initializing.
Jan 14 13:17:47.111852 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:17:47.111866 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:17:47.111879 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 14 13:17:47.111893 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:17:47.111913 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:17:47.111927 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:17:47.111943 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 14 13:17:47.111957 kernel: signal: max sigframe size: 3632
Jan 14 13:17:47.111972 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 13:17:47.111988 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 13:17:47.112003 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 13:17:47.112018 kernel: smp: Bringing up secondary CPUs ...
Jan 14 13:17:47.112034 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 13:17:47.112053 kernel: .... node #0, CPUs: #1
Jan 14 13:17:47.112069 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 14 13:17:47.112085 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 14 13:17:47.112100 kernel: smp: Brought up 1 node, 2 CPUs
Jan 14 13:17:47.112115 kernel: smpboot: Max logical packages: 1
Jan 14 13:17:47.112131 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jan 14 13:17:47.112146 kernel: devtmpfs: initialized
Jan 14 13:17:47.112161 kernel: x86/mm: Memory block size: 128MB
Jan 14 13:17:47.112180 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 14 13:17:47.112196 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 13:17:47.112212 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 14 13:17:47.112226 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 13:17:47.112241 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 13:17:47.112257 kernel: audit: initializing netlink subsys (disabled)
Jan 14 13:17:47.112276 kernel: audit: type=2000 audit(1736860665.029:1): state=initialized audit_enabled=0 res=1
Jan 14 13:17:47.112291 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 13:17:47.112306 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 13:17:47.112325 kernel: cpuidle: using governor menu
Jan 14 13:17:47.112340 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 13:17:47.112355 kernel: dca service started, version 1.12.1
Jan 14 13:17:47.112369 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 14 13:17:47.112385 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 13:17:47.112399 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 13:17:47.112415 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 13:17:47.112429 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 13:17:47.112444 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 13:17:47.112463 kernel: ACPI: Added _OSI(Module Device)
Jan 14 13:17:47.112478 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 13:17:47.112493 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 14 13:17:47.112508 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 13:17:47.112523 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 13:17:47.112539 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 14 13:17:47.112553 kernel: ACPI: Interpreter enabled
Jan 14 13:17:47.112568 kernel: ACPI: PM: (supports S0 S5)
Jan 14 13:17:47.112583 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 13:17:47.112602 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 13:17:47.112618 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 14 13:17:47.112632 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 14 13:17:47.112647 kernel: iommu: Default domain type: Translated
Jan 14 13:17:47.112663 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 13:17:47.112678 kernel: efivars: Registered efivars operations
Jan 14 13:17:47.112694 kernel: PCI: Using ACPI for IRQ routing
Jan 14 13:17:47.112710 kernel: PCI: System does not support PCI
Jan 14 13:17:47.112723 kernel: vgaarb: loaded
Jan 14 13:17:47.112742 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 14 13:17:47.112757 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 13:17:47.112772 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 13:17:47.112788 kernel: pnp: PnP ACPI init
Jan 14 13:17:47.112802 kernel: pnp: PnP ACPI: found 3 devices
Jan 14 13:17:47.112827 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 13:17:47.112842 kernel: NET: Registered PF_INET protocol family
Jan 14 13:17:47.112857 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 14 13:17:47.112872 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 14 13:17:47.112892 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 13:17:47.112907 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 13:17:47.112922 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 14 13:17:47.112937 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 14 13:17:47.112953 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:17:47.112968 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:17:47.112983 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 13:17:47.112997 kernel: NET: Registered PF_XDP protocol family
Jan 14 13:17:47.113013 kernel: PCI: CLS 0 bytes, default 64
Jan 14 13:17:47.113031 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 14 13:17:47.113046 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jan 14 13:17:47.113061 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 14 13:17:47.113075 kernel: Initialise system trusted keyrings
Jan 14 13:17:47.113090 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 14 13:17:47.113105 kernel: Key type asymmetric registered
Jan 14 13:17:47.113120 kernel: Asymmetric key parser 'x509' registered
Jan 14 13:17:47.113134 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 14 13:17:47.113147 kernel: io scheduler mq-deadline registered
Jan 14 13:17:47.113164 kernel: io scheduler kyber registered
Jan 14 13:17:47.113178 kernel: io scheduler bfq registered
Jan 14 13:17:47.113192 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 13:17:47.113206 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 13:17:47.113220 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 13:17:47.113234 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 14 13:17:47.113248 kernel: i8042: PNP: No PS/2 controller found.
Jan 14 13:17:47.113446 kernel: rtc_cmos 00:02: registered as rtc0
Jan 14 13:17:47.113594 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T13:17:46 UTC (1736860666)
Jan 14 13:17:47.113723 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 14 13:17:47.113741 kernel: intel_pstate: CPU model not supported
Jan 14 13:17:47.113756 kernel: efifb: probing for efifb
Jan 14 13:17:47.113770 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 14 13:17:47.113784 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 14 13:17:47.113798 kernel: efifb: scrolling: redraw
Jan 14 13:17:47.113884 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 13:17:47.113898 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 13:17:47.113918 kernel: fb0: EFI VGA frame buffer device
Jan 14 13:17:47.113932 kernel: pstore: Using crash dump compression: deflate
Jan 14 13:17:47.113946 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 14 13:17:47.113964 kernel: NET: Registered PF_INET6 protocol family
Jan 14 13:17:47.113978 kernel: Segment Routing with IPv6
Jan 14 13:17:47.113992 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 13:17:47.114005 kernel: NET: Registered PF_PACKET protocol family
Jan 14 13:17:47.114021 kernel: Key type dns_resolver registered
Jan 14 13:17:47.114036 kernel: IPI shorthand broadcast: enabled
Jan 14 13:17:47.114056 kernel: sched_clock: Marking stable (910005100, 48283000)->(1199334200, -241046100)
Jan 14 13:17:47.114071 kernel: registered taskstats version 1
Jan 14 13:17:47.114086 kernel: Loading compiled-in X.509 certificates
Jan 14 13:17:47.114100 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 14 13:17:47.114115 kernel: Key type .fscrypt registered
Jan 14 13:17:47.114129 kernel: Key type fscrypt-provisioning registered
Jan 14 13:17:47.114142 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 13:17:47.114156 kernel: ima: Allocated hash algorithm: sha1
Jan 14 13:17:47.114172 kernel: ima: No architecture policies found
Jan 14 13:17:47.114185 kernel: clk: Disabling unused clocks
Jan 14 13:17:47.114199 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 14 13:17:47.114213 kernel: Write protecting the kernel read-only data: 36864k
Jan 14 13:17:47.114227 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 14 13:17:47.114240 kernel: Run /init as init process
Jan 14 13:17:47.114254 kernel: with arguments:
Jan 14 13:17:47.114267 kernel: /init
Jan 14 13:17:47.114280 kernel: with environment:
Jan 14 13:17:47.114293 kernel: HOME=/
Jan 14 13:17:47.114309 kernel: TERM=linux
Jan 14 13:17:47.114322 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 14 13:17:47.114339 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 13:17:47.114356 systemd[1]: Detected virtualization microsoft.
Jan 14 13:17:47.114371 systemd[1]: Detected architecture x86-64.
Jan 14 13:17:47.114385 systemd[1]: Running in initrd.
Jan 14 13:17:47.114399 systemd[1]: No hostname configured, using default hostname.
Jan 14 13:17:47.114416 systemd[1]: Hostname set to .
Jan 14 13:17:47.114431 systemd[1]: Initializing machine ID from random generator.
Jan 14 13:17:47.114446 systemd[1]: Queued start job for default target initrd.target.
Jan 14 13:17:47.114461 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:17:47.114476 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:17:47.114491 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 13:17:47.114506 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:17:47.114521 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 13:17:47.114539 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 13:17:47.114555 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 14 13:17:47.114571 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 14 13:17:47.114586 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:17:47.114600 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:17:47.114615 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:17:47.114631 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:17:47.114648 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:17:47.114663 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:17:47.114678 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:17:47.114693 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:17:47.114709 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 13:17:47.114724 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 14 13:17:47.114739 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:17:47.114754 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:17:47.114770 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:17:47.114787 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:17:47.114803 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 13:17:47.114845 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:17:47.114860 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 13:17:47.114876 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 13:17:47.114891 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:17:47.114934 systemd-journald[177]: Collecting audit messages is disabled.
Jan 14 13:17:47.114972 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:17:47.114987 systemd-journald[177]: Journal started
Jan 14 13:17:47.115016 systemd-journald[177]: Runtime Journal (/run/log/journal/d4dc6b11a0df472e8bba79e92133438b) is 8.0M, max 158.8M, 150.8M free.
Jan 14 13:17:47.130884 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:17:47.135045 systemd-modules-load[178]: Inserted module 'overlay'
Jan 14 13:17:47.143823 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:17:47.152586 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 13:17:47.157914 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:17:47.168064 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 13:17:47.180077 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 13:17:47.180108 kernel: Bridge firewalling registered
Jan 14 13:17:47.179842 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jan 14 13:17:47.180666 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:17:47.188415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:17:47.200983 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:17:47.216990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:17:47.221605 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 13:17:47.223385 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:17:47.239077 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:17:47.246749 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:17:47.255128 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:17:47.259892 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:17:47.275068 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 13:17:47.292378 dracut-cmdline[206]: dracut-dracut-053
Jan 14 13:17:47.295723 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:17:47.298049 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 14 13:17:47.318498 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:17:47.340980 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 13:17:47.379832 kernel: SCSI subsystem initialized Jan 14 13:17:47.389731 systemd-resolved[270]: Positive Trust Anchors: Jan 14 13:17:47.402375 kernel: Loading iSCSI transport class v2.0-870. Jan 14 13:17:47.389749 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 13:17:47.389815 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 13:17:47.393024 systemd-resolved[270]: Defaulting to hostname 'linux'. Jan 14 13:17:47.394376 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 13:17:47.399111 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 14 13:17:47.436825 kernel: iscsi: registered transport (tcp) Jan 14 13:17:47.458596 kernel: iscsi: registered transport (qla4xxx) Jan 14 13:17:47.458673 kernel: QLogic iSCSI HBA Driver Jan 14 13:17:47.494524 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 13:17:47.504031 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 13:17:47.532154 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 13:17:47.532242 kernel: device-mapper: uevent: version 1.0.3 Jan 14 13:17:47.536827 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 14 13:17:47.576837 kernel: raid6: avx512x4 gen() 18416 MB/s Jan 14 13:17:47.595822 kernel: raid6: avx512x2 gen() 18402 MB/s Jan 14 13:17:47.614823 kernel: raid6: avx512x1 gen() 18407 MB/s Jan 14 13:17:47.633823 kernel: raid6: avx2x4 gen() 18215 MB/s Jan 14 13:17:47.655820 kernel: raid6: avx2x2 gen() 18098 MB/s Jan 14 13:17:47.675928 kernel: raid6: avx2x1 gen() 13861 MB/s Jan 14 13:17:47.675969 kernel: raid6: using algorithm avx512x4 gen() 18416 MB/s Jan 14 13:17:47.697716 kernel: raid6: .... xor() 7582 MB/s, rmw enabled Jan 14 13:17:47.697762 kernel: raid6: using avx512x2 recovery algorithm Jan 14 13:17:47.720836 kernel: xor: automatically using best checksumming function avx Jan 14 13:17:47.872837 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 13:17:47.882629 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:17:47.896985 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:17:47.911329 systemd-udevd[395]: Using default interface naming scheme 'v255'. Jan 14 13:17:47.915804 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:17:47.928979 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 14 13:17:47.942386 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Jan 14 13:17:47.970075 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:17:47.982055 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:17:48.021078 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:17:48.033003 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 13:17:48.061611 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 13:17:48.069126 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:17:48.072944 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:17:48.076362 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 13:17:48.103622 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 13:17:48.125209 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 13:17:48.130412 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:17:48.140734 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:17:48.140944 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:17:48.148499 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:17:48.170840 kernel: AVX2 version of gcm_enc/dec engaged. Jan 14 13:17:48.170901 kernel: AES CTR mode by8 optimization enabled Jan 14 13:17:48.158883 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:17:48.159113 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:17:48.162550 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 14 13:17:48.183826 kernel: hv_vmbus: Vmbus version:5.2 Jan 14 13:17:48.184556 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:17:48.205179 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:17:48.208916 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:17:48.257247 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 14 13:17:48.257281 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 14 13:17:48.257302 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 14 13:17:48.257320 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 14 13:17:48.257340 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 13:17:48.257358 kernel: hv_vmbus: registering driver hv_netvsc Jan 14 13:17:48.257376 kernel: PTP clock support registered Jan 14 13:17:48.239941 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:17:48.271792 kernel: hv_utils: Registering HyperV Utility Driver Jan 14 13:17:48.271865 kernel: hv_vmbus: registering driver hv_utils Jan 14 13:17:48.278181 kernel: hv_vmbus: registering driver hid_hyperv Jan 14 13:17:48.437968 kernel: hv_utils: Shutdown IC version 3.2 Jan 14 13:17:48.438022 kernel: hv_utils: Heartbeat IC version 3.0 Jan 14 13:17:48.438039 kernel: hv_utils: TimeSync IC version 4.0 Jan 14 13:17:48.438045 systemd-resolved[270]: Clock change detected. Flushing caches. 
Jan 14 13:17:48.445626 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 14 13:17:48.445667 kernel: hv_vmbus: registering driver hv_storvsc Jan 14 13:17:48.453488 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 14 13:17:48.457521 kernel: scsi host0: storvsc_host_t Jan 14 13:17:48.465266 kernel: scsi host1: storvsc_host_t Jan 14 13:17:48.465588 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 14 13:17:48.457965 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:17:48.471810 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 14 13:17:48.478591 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:17:48.508643 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 14 13:17:48.512954 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 13:17:48.512981 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 14 13:17:48.509277 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 14 13:17:48.532244 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 14 13:17:48.546952 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 14 13:17:48.549425 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 14 13:17:48.549618 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 14 13:17:48.549783 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 14 13:17:48.549959 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:17:48.549980 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 14 13:17:48.629368 kernel: hv_netvsc 7c1e5266-731b-7c1e-5266-731b7c1e5266 eth0: VF slot 1 added Jan 14 13:17:48.637368 kernel: hv_vmbus: registering driver hv_pci Jan 14 13:17:48.642375 kernel: hv_pci e797ede1-5438-4a8c-8ba8-70c211e61b82: PCI VMBus probing: Using version 0x10004 Jan 14 13:17:48.688810 kernel: hv_pci e797ede1-5438-4a8c-8ba8-70c211e61b82: PCI host bridge to bus 5438:00 Jan 14 13:17:48.689277 kernel: pci_bus 5438:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 14 13:17:48.689513 kernel: pci_bus 5438:00: No busn resource found for root bus, will use [bus 00-ff] Jan 14 13:17:48.689679 kernel: pci 5438:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 14 13:17:48.689869 kernel: pci 5438:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:17:48.690031 kernel: pci 5438:00:02.0: enabling Extended Tags Jan 14 13:17:48.690209 kernel: pci 5438:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5438:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 14 13:17:48.690395 kernel: pci_bus 5438:00: busn_res: [bus 00-ff] end is updated to 00 Jan 14 13:17:48.690541 kernel: pci 5438:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:17:48.852869 kernel: mlx5_core 5438:00:02.0: enabling device (0000 -> 0002) Jan 14 13:17:49.238606 kernel: mlx5_core 5438:00:02.0: firmware version: 14.30.5000 Jan 14 13:17:49.238813 
kernel: hv_netvsc 7c1e5266-731b-7c1e-5266-731b7c1e5266 eth0: VF registering: eth1 Jan 14 13:17:49.238966 kernel: mlx5_core 5438:00:02.0 eth1: joined to eth0 Jan 14 13:17:49.239137 kernel: mlx5_core 5438:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 14 13:17:49.239309 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (466) Jan 14 13:17:49.098336 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 14 13:17:49.245909 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (449) Jan 14 13:17:49.217177 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 14 13:17:49.267401 kernel: mlx5_core 5438:00:02.0 enP21560s1: renamed from eth1 Jan 14 13:17:49.276566 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 14 13:17:49.288285 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 14 13:17:49.291909 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 14 13:17:49.312522 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 13:17:49.328368 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:17:50.342908 disk-uuid[606]: The operation has completed successfully. Jan 14 13:17:50.346234 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:17:50.414131 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 13:17:50.414246 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 13:17:50.444503 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jan 14 13:17:50.454784 sh[692]: Success Jan 14 13:17:50.481813 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 14 13:17:50.724286 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 14 13:17:50.734412 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 14 13:17:50.739626 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 14 13:17:50.772820 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 14 13:17:50.772901 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:17:50.776752 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 14 13:17:50.784196 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 13:17:50.786783 kernel: BTRFS info (device dm-0): using free space tree Jan 14 13:17:51.176633 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 14 13:17:51.184106 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 13:17:51.200552 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 13:17:51.209580 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 13:17:51.223543 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:17:51.229309 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:17:51.229388 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:17:51.255849 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:17:51.265895 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 14 13:17:51.272542 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:17:51.280290 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 13:17:51.295548 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 13:17:51.323620 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:17:51.336656 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 13:17:51.356900 systemd-networkd[876]: lo: Link UP Jan 14 13:17:51.356910 systemd-networkd[876]: lo: Gained carrier Jan 14 13:17:51.358947 systemd-networkd[876]: Enumeration completed Jan 14 13:17:51.359228 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:17:51.361789 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:17:51.361796 systemd-networkd[876]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:17:51.373998 systemd[1]: Reached target network.target - Network. Jan 14 13:17:51.427380 kernel: mlx5_core 5438:00:02.0 enP21560s1: Link up Jan 14 13:17:51.458380 kernel: hv_netvsc 7c1e5266-731b-7c1e-5266-731b7c1e5266 eth0: Data path switched to VF: enP21560s1 Jan 14 13:17:51.458543 systemd-networkd[876]: enP21560s1: Link UP Jan 14 13:17:51.458683 systemd-networkd[876]: eth0: Link UP Jan 14 13:17:51.458848 systemd-networkd[876]: eth0: Gained carrier Jan 14 13:17:51.458862 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 14 13:17:51.465675 systemd-networkd[876]: enP21560s1: Gained carrier Jan 14 13:17:51.510420 systemd-networkd[876]: eth0: DHCPv4 address 10.200.4.13/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 13:17:52.223879 ignition[831]: Ignition 2.20.0 Jan 14 13:17:52.223893 ignition[831]: Stage: fetch-offline Jan 14 13:17:52.225418 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:17:52.223938 ignition[831]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:17:52.223948 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:17:52.224055 ignition[831]: parsed url from cmdline: "" Jan 14 13:17:52.224060 ignition[831]: no config URL provided Jan 14 13:17:52.224067 ignition[831]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:17:52.224077 ignition[831]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:17:52.224083 ignition[831]: failed to fetch config: resource requires networking Jan 14 13:17:52.224543 ignition[831]: Ignition finished successfully Jan 14 13:17:52.256556 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 14 13:17:52.276606 ignition[884]: Ignition 2.20.0 Jan 14 13:17:52.276619 ignition[884]: Stage: fetch Jan 14 13:17:52.276841 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:17:52.276855 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:17:52.276962 ignition[884]: parsed url from cmdline: "" Jan 14 13:17:52.276965 ignition[884]: no config URL provided Jan 14 13:17:52.276973 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:17:52.276982 ignition[884]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:17:52.277009 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 14 13:17:52.363071 ignition[884]: GET result: OK Jan 14 13:17:52.363189 ignition[884]: config has been read from IMDS userdata Jan 14 13:17:52.363225 ignition[884]: parsing config with SHA512: cacc0c89896528d24afbe0b18513c180aeccc01a8a7ea3c551945014dc978477307a20fa5cb530571fad32332f73ec3bc78126f4bb42dbe2fd7e61f8a056cc2f Jan 14 13:17:52.368556 unknown[884]: fetched base config from "system" Jan 14 13:17:52.368736 unknown[884]: fetched base config from "system" Jan 14 13:17:52.369166 ignition[884]: fetch: fetch complete Jan 14 13:17:52.368746 unknown[884]: fetched user config from "azure" Jan 14 13:17:52.369173 ignition[884]: fetch: fetch passed Jan 14 13:17:52.371003 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 13:17:52.369224 ignition[884]: Ignition finished successfully Jan 14 13:17:52.391583 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 13:17:52.405387 ignition[890]: Ignition 2.20.0 Jan 14 13:17:52.405399 ignition[890]: Stage: kargs Jan 14 13:17:52.407791 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 14 13:17:52.405618 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:17:52.405630 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:17:52.406528 ignition[890]: kargs: kargs passed Jan 14 13:17:52.406573 ignition[890]: Ignition finished successfully Jan 14 13:17:52.426625 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 13:17:52.444202 ignition[896]: Ignition 2.20.0 Jan 14 13:17:52.444214 ignition[896]: Stage: disks Jan 14 13:17:52.446307 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 13:17:52.444455 ignition[896]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:17:52.444470 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:17:52.445408 ignition[896]: disks: disks passed Jan 14 13:17:52.445457 ignition[896]: Ignition finished successfully Jan 14 13:17:52.462453 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 13:17:52.465560 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 13:17:52.472127 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 13:17:52.474833 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:17:52.483287 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:17:52.495589 systemd-networkd[876]: enP21560s1: Gained IPv6LL Jan 14 13:17:52.495830 systemd-networkd[876]: eth0: Gained IPv6LL Jan 14 13:17:52.497698 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 13:17:52.564682 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 14 13:17:52.568578 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 13:17:52.583481 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 14 13:17:52.671653 kernel: EXT4-fs (sda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 14 13:17:52.672270 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 13:17:52.675248 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 13:17:52.716488 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:17:52.725477 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 13:17:52.727497 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (915) Jan 14 13:17:52.740638 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:17:52.740726 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:17:52.743322 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:17:52.744529 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 14 13:17:52.756789 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:17:52.750029 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 13:17:52.750068 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:17:52.767783 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 13:17:52.777269 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 13:17:52.789548 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 14 13:17:53.518712 coreos-metadata[917]: Jan 14 13:17:53.518 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 13:17:53.523685 coreos-metadata[917]: Jan 14 13:17:53.523 INFO Fetch successful Jan 14 13:17:53.523685 coreos-metadata[917]: Jan 14 13:17:53.523 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 14 13:17:53.535903 coreos-metadata[917]: Jan 14 13:17:53.535 INFO Fetch successful Jan 14 13:17:53.561943 coreos-metadata[917]: Jan 14 13:17:53.561 INFO wrote hostname ci-4152.2.0-a-42c09c22a8 to /sysroot/etc/hostname Jan 14 13:17:53.566709 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Jan 14 13:17:53.569755 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:17:53.586112 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory Jan 14 13:17:53.591178 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory Jan 14 13:17:53.596087 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory Jan 14 13:17:54.567905 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 13:17:54.578465 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 13:17:54.587537 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 13:17:54.594409 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:17:54.599585 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 13:17:54.622705 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 14 13:17:54.625491 ignition[1039]: INFO : Ignition 2.20.0 Jan 14 13:17:54.625491 ignition[1039]: INFO : Stage: mount Jan 14 13:17:54.636732 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:17:54.636732 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:17:54.636732 ignition[1039]: INFO : mount: mount passed Jan 14 13:17:54.636732 ignition[1039]: INFO : Ignition finished successfully Jan 14 13:17:54.631722 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 13:17:54.650237 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 13:17:54.658266 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:17:54.674390 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1051) Jan 14 13:17:54.681105 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:17:54.681177 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:17:54.684016 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:17:54.689452 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:17:54.690959 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 14 13:17:54.712529 ignition[1068]: INFO : Ignition 2.20.0 Jan 14 13:17:54.712529 ignition[1068]: INFO : Stage: files Jan 14 13:17:54.717518 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:17:54.717518 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:17:54.717518 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping Jan 14 13:17:54.734189 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 13:17:54.734189 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 13:17:54.807812 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 13:17:54.812259 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 13:17:54.812259 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 13:17:54.808371 unknown[1068]: wrote ssh authorized keys file for user: core Jan 14 13:17:54.827850 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 14 13:17:54.833368 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 14 13:17:54.863455 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 13:17:55.038921 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 14 13:17:55.045008 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 14 13:17:55.045008 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 14 13:17:55.678232 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 14 13:17:55.883319 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 14 13:17:55.889388 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 14 13:17:55.889388 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 13:17:55.889388 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 13:17:55.889388 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 13:17:55.889388 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 13:17:55.889388 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 13:17:55.889388 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 13:17:55.889388 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 13:17:55.928774 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:17:55.928774 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:17:55.928774 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 13:17:55.928774 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 13:17:55.928774 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 13:17:55.928774 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 14 13:17:56.444091 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 14 13:17:57.444362 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 13:17:57.444362 ignition[1068]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 14 13:17:57.469821 ignition[1068]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 13:17:57.479046 ignition[1068]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 13:17:57.479046 ignition[1068]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 14 13:17:57.479046 ignition[1068]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 14 13:17:57.479046 ignition[1068]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 13:17:57.479046 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [started] writing file 
"/sysroot/etc/.ignition-result.json" Jan 14 13:17:57.479046 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:17:57.479046 ignition[1068]: INFO : files: files passed Jan 14 13:17:57.479046 ignition[1068]: INFO : Ignition finished successfully Jan 14 13:17:57.471974 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 13:17:57.492658 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 13:17:57.500507 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 13:17:57.516822 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 13:17:57.536840 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:17:57.536840 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:17:57.516928 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 13:17:57.549787 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:17:57.538543 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:17:57.545726 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 13:17:57.570601 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 13:17:57.597178 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 13:17:57.597305 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 13:17:57.607875 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 13:17:57.608042 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 14 13:17:57.609021 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 13:17:57.626120 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 13:17:57.640778 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:17:57.651607 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 13:17:57.665047 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:17:57.671558 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:17:57.671762 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 13:17:57.672179 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 13:17:57.672290 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:17:57.674050 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 13:17:57.674698 systemd[1]: Stopped target basic.target - Basic System. Jan 14 13:17:57.675167 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 13:17:57.675644 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:17:57.676152 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 13:17:57.676642 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 13:17:57.677105 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:17:57.677626 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 13:17:57.678078 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 13:17:57.678672 systemd[1]: Stopped target swap.target - Swaps. Jan 14 13:17:57.679195 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 14 13:17:57.679331 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:17:57.680734 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:17:57.681188 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:17:57.682028 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 13:17:57.721429 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:17:57.725344 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 13:17:57.725526 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 13:17:57.787380 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 13:17:57.787601 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:17:57.798120 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 13:17:57.798334 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 13:17:57.806901 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 13:17:57.807313 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:17:57.824652 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 13:17:57.832603 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 13:17:57.835736 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 13:17:57.836179 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:17:57.847394 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 14 13:17:57.865682 ignition[1120]: INFO : Ignition 2.20.0 Jan 14 13:17:57.865682 ignition[1120]: INFO : Stage: umount Jan 14 13:17:57.865682 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:17:57.865682 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:17:57.865682 ignition[1120]: INFO : umount: umount passed Jan 14 13:17:57.865682 ignition[1120]: INFO : Ignition finished successfully Jan 14 13:17:57.847574 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:17:57.853569 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 13:17:57.853688 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 13:17:57.859072 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 13:17:57.859333 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 13:17:57.865740 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 13:17:57.865800 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 13:17:57.870572 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 13:17:57.870625 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 13:17:57.875424 systemd[1]: Stopped target network.target - Network. Jan 14 13:17:57.880396 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 13:17:57.880451 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:17:57.919756 systemd[1]: Stopped target paths.target - Path Units. Jan 14 13:17:57.925107 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 13:17:57.928339 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:17:57.930486 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 13:17:57.940011 systemd[1]: Stopped target sockets.target - Socket Units. 
Jan 14 13:17:57.942512 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 13:17:57.942569 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:17:57.950991 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 13:17:57.951044 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 13:17:57.956205 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 13:17:57.956274 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 13:17:57.963244 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 13:17:57.965531 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 13:17:57.980946 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 13:17:57.983733 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 13:17:57.990507 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 13:17:57.991133 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 13:17:57.991220 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 13:17:57.991893 systemd-networkd[876]: eth0: DHCPv6 lease lost Jan 14 13:17:57.996412 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 13:17:57.996506 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 13:17:58.001747 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 13:17:58.001787 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:17:58.024045 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 13:17:58.029986 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 13:17:58.030078 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:17:58.036345 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 14 13:17:58.043011 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 13:17:58.043114 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 13:17:58.062485 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 13:17:58.062618 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:17:58.066406 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 13:17:58.068901 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 13:17:58.074865 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 13:17:58.074933 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:17:58.078888 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 13:17:58.107059 kernel: hv_netvsc 7c1e5266-731b-7c1e-5266-731b7c1e5266 eth0: Data path switched from VF: enP21560s1 Jan 14 13:17:58.079018 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:17:58.085848 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 13:17:58.085936 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 13:17:58.091263 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 13:17:58.093935 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:17:58.100251 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 13:17:58.100301 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:17:58.110675 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 13:17:58.110725 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 13:17:58.140811 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:17:58.140915 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 14 13:17:58.156513 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 13:17:58.160404 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 13:17:58.160484 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:17:58.171327 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 13:17:58.171426 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:17:58.178024 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 13:17:58.178085 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:17:58.178824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:17:58.178857 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:17:58.181994 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 13:17:58.182103 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 13:17:58.182428 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 13:17:58.182507 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 13:17:58.726411 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 13:17:58.726589 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 13:17:58.729721 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 13:17:58.734120 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 13:17:58.734193 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 13:17:58.749595 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 13:17:58.759091 systemd[1]: Switching root. 
Jan 14 13:17:58.846108 systemd-journald[177]: Journal stopped Jan 14 13:17:47.109414 
kernel: secureboot: Secure boot disabled Jan 14 13:17:47.109420 kernel: SMBIOS 3.1.0 present. Jan 14 13:17:47.109431 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jan 14 13:17:47.109438 kernel: Hypervisor detected: Microsoft Hyper-V Jan 14 13:17:47.109445 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jan 14 13:17:47.109452 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Jan 14 13:17:47.109462 kernel: Hyper-V: Nested features: 0x1e0101 Jan 14 13:17:47.109471 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 14 13:17:47.109479 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 14 13:17:47.109488 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 13:17:47.109495 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 13:17:47.109504 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jan 14 13:17:47.109513 kernel: tsc: Detected 2593.906 MHz processor Jan 14 13:17:47.109520 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 14 13:17:47.109529 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 14 13:17:47.109538 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jan 14 13:17:47.109548 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 14 13:17:47.109559 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 14 13:17:47.109566 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jan 14 13:17:47.109573 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jan 14 13:17:47.109583 kernel: Using GB pages for direct mapping Jan 14 13:17:47.109591 kernel: ACPI: Early table checksum verification disabled Jan 14 13:17:47.109601 kernel: ACPI: 
RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 14 13:17:47.109615 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:17:47.109630 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:17:47.109645 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jan 14 13:17:47.109659 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 14 13:17:47.109674 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:17:47.109690 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:17:47.109707 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:17:47.109732 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:17:47.109748 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:17:47.109767 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:17:47.109781 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:17:47.109796 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 14 13:17:47.109820 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jan 14 13:17:47.109834 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 14 13:17:47.109849 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 14 13:17:47.109866 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 14 13:17:47.109888 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 14 13:17:47.109904 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 14 13:17:47.109922 kernel: ACPI: Reserving SRAT table memory at [mem 
0x3ffd4000-0x3ffd42cf] Jan 14 13:17:47.109939 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 14 13:17:47.109953 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 14 13:17:47.109971 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 14 13:17:47.109986 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 14 13:17:47.110002 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 14 13:17:47.110026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 14 13:17:47.110043 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 14 13:17:47.110057 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 14 13:17:47.110073 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 14 13:17:47.110088 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 14 13:17:47.110105 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 14 13:17:47.110121 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 14 13:17:47.110136 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 14 13:17:47.110152 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 14 13:17:47.110173 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 14 13:17:47.110190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 14 13:17:47.110206 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 14 13:17:47.110221 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 14 13:17:47.110235 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 14 13:17:47.110253 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 14 13:17:47.110269 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + 
[mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 14 13:17:47.110285 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 14 13:17:47.110300 kernel: Zone ranges: Jan 14 13:17:47.110322 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 14 13:17:47.110336 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 14 13:17:47.110348 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 13:17:47.110362 kernel: Movable zone start for each node Jan 14 13:17:47.110377 kernel: Early memory node ranges Jan 14 13:17:47.110392 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 14 13:17:47.110409 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 14 13:17:47.110426 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 14 13:17:47.110443 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 13:17:47.110464 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 14 13:17:47.110480 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 14 13:17:47.110496 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 14 13:17:47.110510 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jan 14 13:17:47.110525 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 14 13:17:47.110539 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 14 13:17:47.110554 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 14 13:17:47.110571 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 14 13:17:47.110587 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 14 13:17:47.110613 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 14 13:17:47.110630 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 14 13:17:47.110647 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 14 13:17:47.110663 kernel: Booting paravirtualized kernel on Hyper-V Jan 14 13:17:47.110678 kernel: clocksource: 
refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 14 13:17:47.110694 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 14 13:17:47.110708 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 14 13:17:47.110723 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 14 13:17:47.110738 kernel: pcpu-alloc: [0] 0 1 Jan 14 13:17:47.110760 kernel: Hyper-V: PV spinlocks enabled Jan 14 13:17:47.110773 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 14 13:17:47.110790 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 14 13:17:47.110814 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 14 13:17:47.110830 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 14 13:17:47.110845 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 13:17:47.110860 kernel: Fallback order for Node 0: 0 Jan 14 13:17:47.110878 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 14 13:17:47.110902 kernel: Policy zone: Normal Jan 14 13:17:47.110933 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 13:17:47.110950 kernel: software IO TLB: area num 2. 
Jan 14 13:17:47.110971 kernel: Memory: 8077088K/8387460K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 310116K reserved, 0K cma-reserved) Jan 14 13:17:47.110987 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 13:17:47.111003 kernel: ftrace: allocating 37920 entries in 149 pages Jan 14 13:17:47.111021 kernel: ftrace: allocated 149 pages with 4 groups Jan 14 13:17:47.111036 kernel: Dynamic Preempt: voluntary Jan 14 13:17:47.111052 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 13:17:47.111071 kernel: rcu: RCU event tracing is enabled. Jan 14 13:17:47.111084 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 13:17:47.111099 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 13:17:47.111111 kernel: Rude variant of Tasks RCU enabled. Jan 14 13:17:47.111123 kernel: Tracing variant of Tasks RCU enabled. Jan 14 13:17:47.111135 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 13:17:47.111148 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 13:17:47.111165 kernel: Using NULL legacy PIC Jan 14 13:17:47.111179 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 14 13:17:47.111193 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 14 13:17:47.111208 kernel: Console: colour dummy device 80x25 Jan 14 13:17:47.111223 kernel: printk: console [tty1] enabled Jan 14 13:17:47.111238 kernel: printk: console [ttyS0] enabled Jan 14 13:17:47.111255 kernel: printk: bootconsole [earlyser0] disabled Jan 14 13:17:47.111268 kernel: ACPI: Core revision 20230628 Jan 14 13:17:47.111282 kernel: Failed to register legacy timer interrupt Jan 14 13:17:47.111295 kernel: APIC: Switch to symmetric I/O mode setup Jan 14 13:17:47.111313 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 14 13:17:47.111333 kernel: Hyper-V: Using IPI hypercalls Jan 14 13:17:47.111346 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 14 13:17:47.111359 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 14 13:17:47.111372 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 14 13:17:47.111386 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 14 13:17:47.111401 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 14 13:17:47.111414 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 14 13:17:47.111429 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Jan 14 13:17:47.111446 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 14 13:17:47.111460 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 14 13:17:47.111475 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 14 13:17:47.111489 kernel: Spectre V2 : Mitigation: Retpolines Jan 14 13:17:47.111503 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 14 13:17:47.111518 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 14 13:17:47.111533 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 14 13:17:47.111548 kernel: RETBleed: Vulnerable Jan 14 13:17:47.111561 kernel: Speculative Store Bypass: Vulnerable Jan 14 13:17:47.111574 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 13:17:47.111589 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 13:17:47.111602 kernel: GDS: Unknown: Dependent on hypervisor status Jan 14 13:17:47.111616 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 14 13:17:47.111631 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 14 13:17:47.111646 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 14 13:17:47.111661 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 14 13:17:47.111676 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 14 13:17:47.111690 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 14 13:17:47.111702 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 14 13:17:47.111715 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 14 13:17:47.111728 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 14 13:17:47.111745 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 14 13:17:47.111757 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 14 13:17:47.111772 kernel: Freeing SMP alternatives memory: 32K Jan 14 13:17:47.111785 kernel: pid_max: default: 32768 minimum: 301 Jan 14 13:17:47.111798 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 14 13:17:47.111824 kernel: landlock: Up and running. Jan 14 13:17:47.111838 kernel: SELinux: Initializing. 
Jan 14 13:17:47.111852 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 13:17:47.111866 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 13:17:47.111879 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 14 13:17:47.111893 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:17:47.111913 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:17:47.111927 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:17:47.111943 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 14 13:17:47.111957 kernel: signal: max sigframe size: 3632 Jan 14 13:17:47.111972 kernel: rcu: Hierarchical SRCU implementation. Jan 14 13:17:47.111988 kernel: rcu: Max phase no-delay instances is 400. Jan 14 13:17:47.112003 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 14 13:17:47.112018 kernel: smp: Bringing up secondary CPUs ... Jan 14 13:17:47.112034 kernel: smpboot: x86: Booting SMP configuration: Jan 14 13:17:47.112053 kernel: .... node #0, CPUs: #1 Jan 14 13:17:47.112069 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 14 13:17:47.112085 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 14 13:17:47.112100 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 13:17:47.112115 kernel: smpboot: Max logical packages: 1 Jan 14 13:17:47.112131 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 14 13:17:47.112146 kernel: devtmpfs: initialized Jan 14 13:17:47.112161 kernel: x86/mm: Memory block size: 128MB Jan 14 13:17:47.112180 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 14 13:17:47.112196 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 13:17:47.112212 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 13:17:47.112226 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 13:17:47.112241 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 13:17:47.112257 kernel: audit: initializing netlink subsys (disabled) Jan 14 13:17:47.112276 kernel: audit: type=2000 audit(1736860665.029:1): state=initialized audit_enabled=0 res=1 Jan 14 13:17:47.112291 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 13:17:47.112306 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 14 13:17:47.112325 kernel: cpuidle: using governor menu Jan 14 13:17:47.112340 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 13:17:47.112355 kernel: dca service started, version 1.12.1 Jan 14 13:17:47.112369 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 14 13:17:47.112385 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 14 13:17:47.112399 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 13:17:47.112415 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 13:17:47.112429 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 13:17:47.112444 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 13:17:47.112463 kernel: ACPI: Added _OSI(Module Device) Jan 14 13:17:47.112478 kernel: ACPI: Added _OSI(Processor Device) Jan 14 13:17:47.112493 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 14 13:17:47.112508 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 13:17:47.112523 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 13:17:47.112539 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 14 13:17:47.112553 kernel: ACPI: Interpreter enabled Jan 14 13:17:47.112568 kernel: ACPI: PM: (supports S0 S5) Jan 14 13:17:47.112583 kernel: ACPI: Using IOAPIC for interrupt routing Jan 14 13:17:47.112602 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 14 13:17:47.112618 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 14 13:17:47.112632 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 14 13:17:47.112647 kernel: iommu: Default domain type: Translated Jan 14 13:17:47.112663 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 14 13:17:47.112678 kernel: efivars: Registered efivars operations Jan 14 13:17:47.112694 kernel: PCI: Using ACPI for IRQ routing Jan 14 13:17:47.112710 kernel: PCI: System does not support PCI Jan 14 13:17:47.112723 kernel: vgaarb: loaded Jan 14 13:17:47.112742 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 14 13:17:47.112757 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 13:17:47.112772 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 13:17:47.112788 kernel: pnp: PnP ACPI init Jan 14 13:17:47.112802 
kernel: pnp: PnP ACPI: found 3 devices Jan 14 13:17:47.112827 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 14 13:17:47.112842 kernel: NET: Registered PF_INET protocol family Jan 14 13:17:47.112857 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 14 13:17:47.112872 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 14 13:17:47.112892 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 13:17:47.112907 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 14 13:17:47.112922 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 14 13:17:47.112937 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 14 13:17:47.112953 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 13:17:47.112968 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 13:17:47.112983 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 13:17:47.112997 kernel: NET: Registered PF_XDP protocol family Jan 14 13:17:47.113013 kernel: PCI: CLS 0 bytes, default 64 Jan 14 13:17:47.113031 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 14 13:17:47.113046 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Jan 14 13:17:47.113061 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 14 13:17:47.113075 kernel: Initialise system trusted keyrings Jan 14 13:17:47.113090 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 14 13:17:47.113105 kernel: Key type asymmetric registered Jan 14 13:17:47.113120 kernel: Asymmetric key parser 'x509' registered Jan 14 13:17:47.113134 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 14 13:17:47.113147 kernel: io scheduler mq-deadline 
registered Jan 14 13:17:47.113164 kernel: io scheduler kyber registered Jan 14 13:17:47.113178 kernel: io scheduler bfq registered Jan 14 13:17:47.113192 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 14 13:17:47.113206 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 13:17:47.113220 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 13:17:47.113234 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 14 13:17:47.113248 kernel: i8042: PNP: No PS/2 controller found. Jan 14 13:17:47.113446 kernel: rtc_cmos 00:02: registered as rtc0 Jan 14 13:17:47.113594 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T13:17:46 UTC (1736860666) Jan 14 13:17:47.113723 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 14 13:17:47.113741 kernel: intel_pstate: CPU model not supported Jan 14 13:17:47.113756 kernel: efifb: probing for efifb Jan 14 13:17:47.113770 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 14 13:17:47.113784 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 14 13:17:47.113798 kernel: efifb: scrolling: redraw Jan 14 13:17:47.113884 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 14 13:17:47.113898 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 13:17:47.113918 kernel: fb0: EFI VGA frame buffer device Jan 14 13:17:47.113932 kernel: pstore: Using crash dump compression: deflate Jan 14 13:17:47.113946 kernel: pstore: Registered efi_pstore as persistent store backend Jan 14 13:17:47.113964 kernel: NET: Registered PF_INET6 protocol family Jan 14 13:17:47.113978 kernel: Segment Routing with IPv6 Jan 14 13:17:47.113992 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 13:17:47.114005 kernel: NET: Registered PF_PACKET protocol family Jan 14 13:17:47.114021 kernel: Key type dns_resolver registered Jan 14 13:17:47.114036 kernel: IPI shorthand broadcast: enabled Jan 14 13:17:47.114056 kernel: 
sched_clock: Marking stable (910005100, 48283000)->(1199334200, -241046100) Jan 14 13:17:47.114071 kernel: registered taskstats version 1 Jan 14 13:17:47.114086 kernel: Loading compiled-in X.509 certificates Jan 14 13:17:47.114100 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344' Jan 14 13:17:47.114115 kernel: Key type .fscrypt registered Jan 14 13:17:47.114129 kernel: Key type fscrypt-provisioning registered Jan 14 13:17:47.114142 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 13:17:47.114156 kernel: ima: Allocated hash algorithm: sha1 Jan 14 13:17:47.114172 kernel: ima: No architecture policies found Jan 14 13:17:47.114185 kernel: clk: Disabling unused clocks Jan 14 13:17:47.114199 kernel: Freeing unused kernel image (initmem) memory: 42976K Jan 14 13:17:47.114213 kernel: Write protecting the kernel read-only data: 36864k Jan 14 13:17:47.114227 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 14 13:17:47.114240 kernel: Run /init as init process Jan 14 13:17:47.114254 kernel: with arguments: Jan 14 13:17:47.114267 kernel: /init Jan 14 13:17:47.114280 kernel: with environment: Jan 14 13:17:47.114293 kernel: HOME=/ Jan 14 13:17:47.114309 kernel: TERM=linux Jan 14 13:17:47.114322 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 14 13:17:47.114339 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 14 13:17:47.114356 systemd[1]: Detected virtualization microsoft. Jan 14 13:17:47.114371 systemd[1]: Detected architecture x86-64. Jan 14 13:17:47.114385 systemd[1]: Running in initrd. Jan 14 13:17:47.114399 systemd[1]: No hostname configured, using default hostname. 
Jan 14 13:17:47.114416 systemd[1]: Hostname set to . Jan 14 13:17:47.114431 systemd[1]: Initializing machine ID from random generator. Jan 14 13:17:47.114446 systemd[1]: Queued start job for default target initrd.target. Jan 14 13:17:47.114461 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:17:47.114476 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:17:47.114491 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 13:17:47.114506 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 13:17:47.114521 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 13:17:47.114539 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 13:17:47.114555 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 14 13:17:47.114571 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 14 13:17:47.114586 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:17:47.114600 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:17:47.114615 systemd[1]: Reached target paths.target - Path Units. Jan 14 13:17:47.114631 systemd[1]: Reached target slices.target - Slice Units. Jan 14 13:17:47.114648 systemd[1]: Reached target swap.target - Swaps. Jan 14 13:17:47.114663 systemd[1]: Reached target timers.target - Timer Units. Jan 14 13:17:47.114678 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:17:47.114693 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 14 13:17:47.114709 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 13:17:47.114724 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 14 13:17:47.114739 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:17:47.114754 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 13:17:47.114770 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:17:47.114787 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 13:17:47.114803 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 13:17:47.114845 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 13:17:47.114860 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 13:17:47.114876 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 13:17:47.114891 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 13:17:47.114934 systemd-journald[177]: Collecting audit messages is disabled. Jan 14 13:17:47.114972 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 13:17:47.114987 systemd-journald[177]: Journal started Jan 14 13:17:47.115016 systemd-journald[177]: Runtime Journal (/run/log/journal/d4dc6b11a0df472e8bba79e92133438b) is 8.0M, max 158.8M, 150.8M free. Jan 14 13:17:47.130884 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:17:47.135045 systemd-modules-load[178]: Inserted module 'overlay' Jan 14 13:17:47.143823 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 13:17:47.152586 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 13:17:47.157914 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:17:47.168064 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 14 13:17:47.180077 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 13:17:47.180108 kernel: Bridge firewalling registered Jan 14 13:17:47.179842 systemd-modules-load[178]: Inserted module 'br_netfilter' Jan 14 13:17:47.180666 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 13:17:47.188415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:17:47.200983 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:17:47.216990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:17:47.221605 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 13:17:47.223385 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 13:17:47.239077 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:17:47.246749 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 13:17:47.255128 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:17:47.259892 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:17:47.275068 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 13:17:47.292378 dracut-cmdline[206]: dracut-dracut-053 Jan 14 13:17:47.295723 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 14 13:17:47.298049 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 14 13:17:47.318498 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:17:47.340980 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 13:17:47.379832 kernel: SCSI subsystem initialized Jan 14 13:17:47.389731 systemd-resolved[270]: Positive Trust Anchors: Jan 14 13:17:47.402375 kernel: Loading iSCSI transport class v2.0-870. Jan 14 13:17:47.389749 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 13:17:47.389815 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 13:17:47.393024 systemd-resolved[270]: Defaulting to hostname 'linux'. Jan 14 13:17:47.394376 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 13:17:47.399111 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 14 13:17:47.436825 kernel: iscsi: registered transport (tcp) Jan 14 13:17:47.458596 kernel: iscsi: registered transport (qla4xxx) Jan 14 13:17:47.458673 kernel: QLogic iSCSI HBA Driver Jan 14 13:17:47.494524 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 13:17:47.504031 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 13:17:47.532154 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 13:17:47.532242 kernel: device-mapper: uevent: version 1.0.3 Jan 14 13:17:47.536827 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 14 13:17:47.576837 kernel: raid6: avx512x4 gen() 18416 MB/s Jan 14 13:17:47.595822 kernel: raid6: avx512x2 gen() 18402 MB/s Jan 14 13:17:47.614823 kernel: raid6: avx512x1 gen() 18407 MB/s Jan 14 13:17:47.633823 kernel: raid6: avx2x4 gen() 18215 MB/s Jan 14 13:17:47.655820 kernel: raid6: avx2x2 gen() 18098 MB/s Jan 14 13:17:47.675928 kernel: raid6: avx2x1 gen() 13861 MB/s Jan 14 13:17:47.675969 kernel: raid6: using algorithm avx512x4 gen() 18416 MB/s Jan 14 13:17:47.697716 kernel: raid6: .... xor() 7582 MB/s, rmw enabled Jan 14 13:17:47.697762 kernel: raid6: using avx512x2 recovery algorithm Jan 14 13:17:47.720836 kernel: xor: automatically using best checksumming function avx Jan 14 13:17:47.872837 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 13:17:47.882629 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:17:47.896985 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:17:47.911329 systemd-udevd[395]: Using default interface naming scheme 'v255'. Jan 14 13:17:47.915804 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:17:47.928979 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 14 13:17:47.942386 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Jan 14 13:17:47.970075 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:17:47.982055 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:17:48.021078 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:17:48.033003 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 13:17:48.061611 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 13:17:48.069126 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:17:48.072944 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:17:48.076362 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 13:17:48.103622 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 13:17:48.125209 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 13:17:48.130412 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:17:48.140734 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:17:48.140944 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:17:48.148499 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:17:48.170840 kernel: AVX2 version of gcm_enc/dec engaged. Jan 14 13:17:48.170901 kernel: AES CTR mode by8 optimization enabled Jan 14 13:17:48.158883 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:17:48.159113 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:17:48.162550 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 14 13:17:48.183826 kernel: hv_vmbus: Vmbus version:5.2 Jan 14 13:17:48.184556 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:17:48.205179 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:17:48.208916 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:17:48.257247 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 14 13:17:48.257281 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 14 13:17:48.257302 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 14 13:17:48.257320 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 14 13:17:48.257340 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 13:17:48.257358 kernel: hv_vmbus: registering driver hv_netvsc Jan 14 13:17:48.257376 kernel: PTP clock support registered Jan 14 13:17:48.239941 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:17:48.271792 kernel: hv_utils: Registering HyperV Utility Driver Jan 14 13:17:48.271865 kernel: hv_vmbus: registering driver hv_utils Jan 14 13:17:48.278181 kernel: hv_vmbus: registering driver hid_hyperv Jan 14 13:17:48.437968 kernel: hv_utils: Shutdown IC version 3.2 Jan 14 13:17:48.438022 kernel: hv_utils: Heartbeat IC version 3.0 Jan 14 13:17:48.438039 kernel: hv_utils: TimeSync IC version 4.0 Jan 14 13:17:48.438045 systemd-resolved[270]: Clock change detected. Flushing caches. 
Jan 14 13:17:48.445626 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 14 13:17:48.445667 kernel: hv_vmbus: registering driver hv_storvsc Jan 14 13:17:48.453488 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 14 13:17:48.457521 kernel: scsi host0: storvsc_host_t Jan 14 13:17:48.465266 kernel: scsi host1: storvsc_host_t Jan 14 13:17:48.465588 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 14 13:17:48.457965 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:17:48.471810 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 14 13:17:48.478591 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:17:48.508643 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 14 13:17:48.512954 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 13:17:48.512981 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 14 13:17:48.509277 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 14 13:17:48.532244 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 14 13:17:48.546952 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 14 13:17:48.549425 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 14 13:17:48.549618 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 14 13:17:48.549783 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 14 13:17:48.549959 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:17:48.549980 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 14 13:17:48.629368 kernel: hv_netvsc 7c1e5266-731b-7c1e-5266-731b7c1e5266 eth0: VF slot 1 added Jan 14 13:17:48.637368 kernel: hv_vmbus: registering driver hv_pci Jan 14 13:17:48.642375 kernel: hv_pci e797ede1-5438-4a8c-8ba8-70c211e61b82: PCI VMBus probing: Using version 0x10004 Jan 14 13:17:48.688810 kernel: hv_pci e797ede1-5438-4a8c-8ba8-70c211e61b82: PCI host bridge to bus 5438:00 Jan 14 13:17:48.689277 kernel: pci_bus 5438:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 14 13:17:48.689513 kernel: pci_bus 5438:00: No busn resource found for root bus, will use [bus 00-ff] Jan 14 13:17:48.689679 kernel: pci 5438:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 14 13:17:48.689869 kernel: pci 5438:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:17:48.690031 kernel: pci 5438:00:02.0: enabling Extended Tags Jan 14 13:17:48.690209 kernel: pci 5438:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5438:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 14 13:17:48.690395 kernel: pci_bus 5438:00: busn_res: [bus 00-ff] end is updated to 00 Jan 14 13:17:48.690541 kernel: pci 5438:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:17:48.852869 kernel: mlx5_core 5438:00:02.0: enabling device (0000 -> 0002) Jan 14 13:17:49.238606 kernel: mlx5_core 5438:00:02.0: firmware version: 14.30.5000 Jan 14 13:17:49.238813 
kernel: hv_netvsc 7c1e5266-731b-7c1e-5266-731b7c1e5266 eth0: VF registering: eth1 Jan 14 13:17:49.238966 kernel: mlx5_core 5438:00:02.0 eth1: joined to eth0 Jan 14 13:17:49.239137 kernel: mlx5_core 5438:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 14 13:17:49.239309 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (466) Jan 14 13:17:49.098336 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 14 13:17:49.245909 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (449) Jan 14 13:17:49.217177 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 14 13:17:49.267401 kernel: mlx5_core 5438:00:02.0 enP21560s1: renamed from eth1 Jan 14 13:17:49.276566 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 14 13:17:49.288285 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 14 13:17:49.291909 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 14 13:17:49.312522 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 13:17:49.328368 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:17:50.342908 disk-uuid[606]: The operation has completed successfully. Jan 14 13:17:50.346234 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:17:50.414131 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 13:17:50.414246 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 13:17:50.444503 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jan 14 13:17:50.454784 sh[692]: Success Jan 14 13:17:50.481813 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 14 13:17:50.724286 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 14 13:17:50.734412 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 14 13:17:50.739626 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 14 13:17:50.772820 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 14 13:17:50.772901 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:17:50.776752 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 14 13:17:50.784196 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 13:17:50.786783 kernel: BTRFS info (device dm-0): using free space tree Jan 14 13:17:51.176633 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 14 13:17:51.184106 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 13:17:51.200552 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 13:17:51.209580 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 13:17:51.223543 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:17:51.229309 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:17:51.229388 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:17:51.255849 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:17:51.265895 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 14 13:17:51.272542 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:17:51.280290 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 13:17:51.295548 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 13:17:51.323620 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:17:51.336656 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 13:17:51.356900 systemd-networkd[876]: lo: Link UP Jan 14 13:17:51.356910 systemd-networkd[876]: lo: Gained carrier Jan 14 13:17:51.358947 systemd-networkd[876]: Enumeration completed Jan 14 13:17:51.359228 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:17:51.361789 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:17:51.361796 systemd-networkd[876]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:17:51.373998 systemd[1]: Reached target network.target - Network. Jan 14 13:17:51.427380 kernel: mlx5_core 5438:00:02.0 enP21560s1: Link up Jan 14 13:17:51.458380 kernel: hv_netvsc 7c1e5266-731b-7c1e-5266-731b7c1e5266 eth0: Data path switched to VF: enP21560s1 Jan 14 13:17:51.458543 systemd-networkd[876]: enP21560s1: Link UP Jan 14 13:17:51.458683 systemd-networkd[876]: eth0: Link UP Jan 14 13:17:51.458848 systemd-networkd[876]: eth0: Gained carrier Jan 14 13:17:51.458862 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 14 13:17:51.465675 systemd-networkd[876]: enP21560s1: Gained carrier Jan 14 13:17:51.510420 systemd-networkd[876]: eth0: DHCPv4 address 10.200.4.13/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 13:17:52.223879 ignition[831]: Ignition 2.20.0 Jan 14 13:17:52.223893 ignition[831]: Stage: fetch-offline Jan 14 13:17:52.225418 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:17:52.223938 ignition[831]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:17:52.223948 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:17:52.224055 ignition[831]: parsed url from cmdline: "" Jan 14 13:17:52.224060 ignition[831]: no config URL provided Jan 14 13:17:52.224067 ignition[831]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:17:52.224077 ignition[831]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:17:52.224083 ignition[831]: failed to fetch config: resource requires networking Jan 14 13:17:52.224543 ignition[831]: Ignition finished successfully Jan 14 13:17:52.256556 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 14 13:17:52.276606 ignition[884]: Ignition 2.20.0 Jan 14 13:17:52.276619 ignition[884]: Stage: fetch Jan 14 13:17:52.276841 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:17:52.276855 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:17:52.276962 ignition[884]: parsed url from cmdline: "" Jan 14 13:17:52.276965 ignition[884]: no config URL provided Jan 14 13:17:52.276973 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:17:52.276982 ignition[884]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:17:52.277009 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 14 13:17:52.363071 ignition[884]: GET result: OK Jan 14 13:17:52.363189 ignition[884]: config has been read from IMDS userdata Jan 14 13:17:52.363225 ignition[884]: parsing config with SHA512: cacc0c89896528d24afbe0b18513c180aeccc01a8a7ea3c551945014dc978477307a20fa5cb530571fad32332f73ec3bc78126f4bb42dbe2fd7e61f8a056cc2f Jan 14 13:17:52.368556 unknown[884]: fetched base config from "system" Jan 14 13:17:52.368736 unknown[884]: fetched base config from "system" Jan 14 13:17:52.369166 ignition[884]: fetch: fetch complete Jan 14 13:17:52.368746 unknown[884]: fetched user config from "azure" Jan 14 13:17:52.369173 ignition[884]: fetch: fetch passed Jan 14 13:17:52.371003 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 13:17:52.369224 ignition[884]: Ignition finished successfully Jan 14 13:17:52.391583 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 13:17:52.405387 ignition[890]: Ignition 2.20.0 Jan 14 13:17:52.405399 ignition[890]: Stage: kargs Jan 14 13:17:52.407791 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 14 13:17:52.405618 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:17:52.405630 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:17:52.406528 ignition[890]: kargs: kargs passed Jan 14 13:17:52.406573 ignition[890]: Ignition finished successfully Jan 14 13:17:52.426625 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 13:17:52.444202 ignition[896]: Ignition 2.20.0 Jan 14 13:17:52.444214 ignition[896]: Stage: disks Jan 14 13:17:52.446307 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 13:17:52.444455 ignition[896]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:17:52.444470 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:17:52.445408 ignition[896]: disks: disks passed Jan 14 13:17:52.445457 ignition[896]: Ignition finished successfully Jan 14 13:17:52.462453 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 13:17:52.465560 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 13:17:52.472127 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 13:17:52.474833 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:17:52.483287 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:17:52.495589 systemd-networkd[876]: enP21560s1: Gained IPv6LL Jan 14 13:17:52.495830 systemd-networkd[876]: eth0: Gained IPv6LL Jan 14 13:17:52.497698 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 13:17:52.564682 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 14 13:17:52.568578 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 13:17:52.583481 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 14 13:17:52.671653 kernel: EXT4-fs (sda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none.
Jan 14 13:17:52.672270 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 13:17:52.675248 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 13:17:52.716488 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:17:52.725477 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 13:17:52.727497 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (915)
Jan 14 13:17:52.740638 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 14 13:17:52.740726 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:17:52.743322 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:17:52.744529 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 14 13:17:52.756789 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:17:52.750029 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 13:17:52.750068 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:17:52.767783 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:17:52.777269 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 13:17:52.789548 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 13:17:53.518712 coreos-metadata[917]: Jan 14 13:17:53.518 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 13:17:53.523685 coreos-metadata[917]: Jan 14 13:17:53.523 INFO Fetch successful
Jan 14 13:17:53.523685 coreos-metadata[917]: Jan 14 13:17:53.523 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 14 13:17:53.535903 coreos-metadata[917]: Jan 14 13:17:53.535 INFO Fetch successful
Jan 14 13:17:53.561943 coreos-metadata[917]: Jan 14 13:17:53.561 INFO wrote hostname ci-4152.2.0-a-42c09c22a8 to /sysroot/etc/hostname
Jan 14 13:17:53.566709 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory
Jan 14 13:17:53.569755 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:17:53.586112 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory
Jan 14 13:17:53.591178 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory
Jan 14 13:17:53.596087 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 14 13:17:54.567905 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 13:17:54.578465 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 13:17:54.587537 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 13:17:54.594409 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 14 13:17:54.599585 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 13:17:54.622705 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 13:17:54.625491 ignition[1039]: INFO : Ignition 2.20.0
Jan 14 13:17:54.625491 ignition[1039]: INFO : Stage: mount
Jan 14 13:17:54.636732 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:17:54.636732 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:17:54.636732 ignition[1039]: INFO : mount: mount passed
Jan 14 13:17:54.636732 ignition[1039]: INFO : Ignition finished successfully
Jan 14 13:17:54.631722 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 13:17:54.650237 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 13:17:54.658266 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:17:54.674390 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1051)
Jan 14 13:17:54.681105 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 14 13:17:54.681177 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:17:54.684016 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:17:54.689452 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:17:54.690959 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:17:54.712529 ignition[1068]: INFO : Ignition 2.20.0
Jan 14 13:17:54.712529 ignition[1068]: INFO : Stage: files
Jan 14 13:17:54.717518 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:17:54.717518 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:17:54.717518 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 13:17:54.734189 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 13:17:54.734189 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 13:17:54.807812 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 13:17:54.812259 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 13:17:54.812259 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 13:17:54.808371 unknown[1068]: wrote ssh authorized keys file for user: core
Jan 14 13:17:54.827850 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 14 13:17:54.833368 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 14 13:17:54.863455 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 14 13:17:55.038921 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 14 13:17:55.045008 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 14 13:17:55.045008 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 14 13:17:55.678232 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 14 13:17:55.883319 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 14 13:17:55.889388 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 13:17:55.889388 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 13:17:55.889388 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 13:17:55.889388 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 13:17:55.889388 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 13:17:55.889388 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 13:17:55.889388 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 13:17:55.889388 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 13:17:55.928774 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:17:55.928774 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:17:55.928774 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 14 13:17:55.928774 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 14 13:17:55.928774 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 14 13:17:55.928774 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 14 13:17:56.444091 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 14 13:17:57.444362 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 14 13:17:57.444362 ignition[1068]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 14 13:17:57.469821 ignition[1068]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 13:17:57.479046 ignition[1068]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 13:17:57.479046 ignition[1068]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 14 13:17:57.479046 ignition[1068]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 14 13:17:57.479046 ignition[1068]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 14 13:17:57.479046 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:17:57.479046 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:17:57.479046 ignition[1068]: INFO : files: files passed
Jan 14 13:17:57.479046 ignition[1068]: INFO : Ignition finished successfully
Jan 14 13:17:57.471974 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 13:17:57.492658 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 13:17:57.500507 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 13:17:57.516822 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 13:17:57.536840 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:17:57.536840 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:17:57.516928 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 13:17:57.549787 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:17:57.538543 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:17:57.545726 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 13:17:57.570601 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 13:17:57.597178 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 13:17:57.597305 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 13:17:57.607875 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 13:17:57.608042 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 13:17:57.609021 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 13:17:57.626120 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 13:17:57.640778 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:17:57.651607 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 13:17:57.665047 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:17:57.671558 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:17:57.671762 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 13:17:57.672179 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 13:17:57.672290 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:17:57.674050 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 13:17:57.674698 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 13:17:57.675167 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 13:17:57.675644 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:17:57.676152 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 13:17:57.676642 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 13:17:57.677105 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:17:57.677626 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 13:17:57.678078 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 13:17:57.678672 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 13:17:57.679195 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 13:17:57.679331 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:17:57.680734 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:17:57.681188 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:17:57.682028 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 13:17:57.721429 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:17:57.725344 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 13:17:57.725526 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:17:57.787380 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 13:17:57.787601 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:17:57.798120 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 13:17:57.798334 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 13:17:57.806901 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 14 13:17:57.807313 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:17:57.824652 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 13:17:57.832603 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 13:17:57.835736 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 13:17:57.836179 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:17:57.847394 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 13:17:57.865682 ignition[1120]: INFO : Ignition 2.20.0
Jan 14 13:17:57.865682 ignition[1120]: INFO : Stage: umount
Jan 14 13:17:57.865682 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:17:57.865682 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:17:57.865682 ignition[1120]: INFO : umount: umount passed
Jan 14 13:17:57.865682 ignition[1120]: INFO : Ignition finished successfully
Jan 14 13:17:57.847574 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:17:57.853569 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 13:17:57.853688 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 13:17:57.859072 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 13:17:57.859333 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 13:17:57.865740 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 14 13:17:57.865800 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 14 13:17:57.870572 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 14 13:17:57.870625 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 14 13:17:57.875424 systemd[1]: Stopped target network.target - Network.
Jan 14 13:17:57.880396 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 13:17:57.880451 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:17:57.919756 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 13:17:57.925107 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 13:17:57.928339 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:17:57.930486 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 13:17:57.940011 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 13:17:57.942512 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 13:17:57.942569 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:17:57.950991 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 13:17:57.951044 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:17:57.956205 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 14 13:17:57.956274 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 14 13:17:57.963244 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 14 13:17:57.965531 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 14 13:17:57.980946 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 14 13:17:57.983733 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 14 13:17:57.990507 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 14 13:17:57.991133 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 13:17:57.991220 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 13:17:57.991893 systemd-networkd[876]: eth0: DHCPv6 lease lost
Jan 14 13:17:57.996412 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 13:17:57.996506 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 13:17:58.001747 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 13:17:58.001787 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:17:58.024045 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 13:17:58.029986 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 13:17:58.030078 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:17:58.036345 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:17:58.043011 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 13:17:58.043114 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 13:17:58.062485 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 13:17:58.062618 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:17:58.066406 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 13:17:58.068901 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:17:58.074865 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 13:17:58.074933 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:17:58.078888 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 13:17:58.107059 kernel: hv_netvsc 7c1e5266-731b-7c1e-5266-731b7c1e5266 eth0: Data path switched from VF: enP21560s1
Jan 14 13:17:58.079018 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:17:58.085848 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 13:17:58.085936 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:17:58.091263 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 13:17:58.093935 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:17:58.100251 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 13:17:58.100301 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:17:58.110675 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 13:17:58.110725 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:17:58.140811 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 13:17:58.140915 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:17:58.156513 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 13:17:58.160404 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 13:17:58.160484 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:17:58.171327 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 14 13:17:58.171426 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:17:58.178024 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 13:17:58.178085 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:17:58.178824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:17:58.178857 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:17:58.181994 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 13:17:58.182103 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 13:17:58.182428 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 13:17:58.182507 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 13:17:58.726411 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 13:17:58.726589 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 13:17:58.729721 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 13:17:58.734120 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 13:17:58.734193 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 13:17:58.749595 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 13:17:58.759091 systemd[1]: Switching root.
Jan 14 13:17:58.846108 systemd-journald[177]: Journal stopped
Jan 14 13:18:04.576307 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Jan 14 13:18:04.576345 kernel: SELinux: policy capability network_peer_controls=1
Jan 14 13:18:04.576369 kernel: SELinux: policy capability open_perms=1
Jan 14 13:18:04.576381 kernel: SELinux: policy capability extended_socket_class=1
Jan 14 13:18:04.576392 kernel: SELinux: policy capability always_check_network=0
Jan 14 13:18:04.576403 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 14 13:18:04.576417 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 14 13:18:04.576433 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 14 13:18:04.576445 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 14 13:18:04.576458 kernel: audit: type=1403 audit(1736860680.518:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 14 13:18:04.576472 systemd[1]: Successfully loaded SELinux policy in 148.752ms.
Jan 14 13:18:04.576488 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.153ms.
Jan 14 13:18:04.576505 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 13:18:04.576522 systemd[1]: Detected virtualization microsoft.
Jan 14 13:18:04.576543 systemd[1]: Detected architecture x86-64.
Jan 14 13:18:04.576560 systemd[1]: Detected first boot.
Jan 14 13:18:04.576577 systemd[1]: Hostname set to .
Jan 14 13:18:04.576593 systemd[1]: Initializing machine ID from random generator.
Jan 14 13:18:04.576610 zram_generator::config[1162]: No configuration found.
Jan 14 13:18:04.576629 systemd[1]: Populated /etc with preset unit settings.
Jan 14 13:18:04.576644 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 14 13:18:04.576655 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 14 13:18:04.576667 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 14 13:18:04.576678 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 14 13:18:04.576690 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 14 13:18:04.576702 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 14 13:18:04.576716 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 14 13:18:04.576728 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 14 13:18:04.576738 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 14 13:18:04.576748 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 14 13:18:04.576761 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 14 13:18:04.576771 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:18:04.576785 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:18:04.576795 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 14 13:18:04.576809 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 14 13:18:04.576820 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 14 13:18:04.576833 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:18:04.576842 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 14 13:18:04.576853 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:18:04.576864 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 14 13:18:04.576880 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 14 13:18:04.576890 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 14 13:18:04.576905 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 14 13:18:04.576915 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:18:04.576927 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 13:18:04.576938 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:18:04.576948 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:18:04.576960 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 14 13:18:04.576970 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 14 13:18:04.576985 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:18:04.576996 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:18:04.577009 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:18:04.577019 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 14 13:18:04.577033 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 14 13:18:04.577046 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 14 13:18:04.577059 systemd[1]: Mounting media.mount - External Media Directory...
Jan 14 13:18:04.577069 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:18:04.577079 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 14 13:18:04.577089 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 14 13:18:04.577099 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 14 13:18:04.577109 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 14 13:18:04.577119 systemd[1]: Reached target machines.target - Containers.
Jan 14 13:18:04.577132 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 14 13:18:04.577142 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:18:04.577155 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:18:04.577165 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 14 13:18:04.577175 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:18:04.577185 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 13:18:04.577195 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:18:04.577205 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 14 13:18:04.577215 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:18:04.577227 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 14 13:18:04.577237 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 14 13:18:04.577248 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 14 13:18:04.577261 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 14 13:18:04.577274 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 14 13:18:04.577284 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:18:04.577294 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:18:04.577307 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 13:18:04.577320 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 14 13:18:04.577333 kernel: fuse: init (API version 7.39)
Jan 14 13:18:04.577342 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 13:18:04.577365 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 14 13:18:04.577379 systemd[1]: Stopped verity-setup.service.
Jan 14 13:18:04.577389 kernel: loop: module loaded
Jan 14 13:18:04.577402 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:18:04.577412 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 14 13:18:04.577425 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 14 13:18:04.577438 systemd[1]: Mounted media.mount - External Media Directory.
Jan 14 13:18:04.577473 systemd-journald[1254]: Collecting audit messages is disabled.
Jan 14 13:18:04.577496 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 14 13:18:04.577509 systemd-journald[1254]: Journal started
Jan 14 13:18:04.577532 systemd-journald[1254]: Runtime Journal (/run/log/journal/00b0e9cf051240b2ab480d0057f78b31) is 8.0M, max 158.8M, 150.8M free.
Jan 14 13:18:03.826156 systemd[1]: Queued start job for default target multi-user.target.
Jan 14 13:18:03.980083 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 14 13:18:03.980491 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 14 13:18:04.608269 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:18:04.593250 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 14 13:18:04.596802 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 14 13:18:04.599916 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 14 13:18:04.603883 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:18:04.608083 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 14 13:18:04.608676 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 14 13:18:04.615960 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:18:04.616254 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:18:04.620144 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:18:04.620470 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:18:04.624299 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 14 13:18:04.624605 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 14 13:18:04.628317 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:18:04.628640 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:18:04.632412 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:18:04.636248 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 13:18:04.640460 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 14 13:18:04.660731 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 13:18:04.671425 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 14 13:18:04.686416 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 14 13:18:04.690096 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 14 13:18:04.690137 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 13:18:04.696040 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 14 13:18:04.706374 kernel: ACPI: bus type drm_connector registered
Jan 14 13:18:04.708514 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 14 13:18:04.714230 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 14 13:18:04.717681 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:18:04.719714 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 14 13:18:04.727709 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 14 13:18:04.731111 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 13:18:04.732725 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 14 13:18:04.736125 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 13:18:04.739397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:18:04.747486 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 14 13:18:04.752520 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 13:18:04.758086 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 13:18:04.758266 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 13:18:04.782099 systemd-journald[1254]: Time spent on flushing to /var/log/journal/00b0e9cf051240b2ab480d0057f78b31 is 33.216ms for 960 entries.
Jan 14 13:18:04.782099 systemd-journald[1254]: System Journal (/var/log/journal/00b0e9cf051240b2ab480d0057f78b31) is 8.0M, max 2.6G, 2.6G free.
Jan 14 13:18:04.836688 systemd-journald[1254]: Received client request to flush runtime journal.
Jan 14 13:18:04.770928 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:18:04.775486 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 14 13:18:04.779320 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 14 13:18:04.793225 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 14 13:18:04.797184 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 14 13:18:04.806710 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 14 13:18:04.820522 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 14 13:18:04.832538 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 14 13:18:04.839080 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 14 13:18:04.845490 udevadm[1309]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 14 13:18:04.863665 kernel: loop0: detected capacity change from 0 to 28272
Jan 14 13:18:04.866489 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:18:04.872674 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
Jan 14 13:18:04.872701 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
Jan 14 13:18:04.880208 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:18:04.892679 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 14 13:18:04.906504 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 14 13:18:04.907265 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 14 13:18:05.033428 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 14 13:18:05.042605 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:18:05.062173 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Jan 14 13:18:05.062631 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Jan 14 13:18:05.068628 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:18:05.206380 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 14 13:18:05.275462 kernel: loop1: detected capacity change from 0 to 140992
Jan 14 13:18:05.713379 kernel: loop2: detected capacity change from 0 to 210664
Jan 14 13:18:05.758382 kernel: loop3: detected capacity change from 0 to 138184
Jan 14 13:18:06.071956 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 14 13:18:06.081947 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:18:06.107643 systemd-udevd[1326]: Using default interface naming scheme 'v255'.
Jan 14 13:18:06.280388 kernel: loop4: detected capacity change from 0 to 28272
Jan 14 13:18:06.289376 kernel: loop5: detected capacity change from 0 to 140992
Jan 14 13:18:06.305379 kernel: loop6: detected capacity change from 0 to 210664
Jan 14 13:18:06.314378 kernel: loop7: detected capacity change from 0 to 138184
Jan 14 13:18:06.324704 (sd-merge)[1328]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 14 13:18:06.325346 (sd-merge)[1328]: Merged extensions into '/usr'.
Jan 14 13:18:06.329908 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 14 13:18:06.329927 systemd[1]: Reloading...
Jan 14 13:18:06.387582 zram_generator::config[1353]: No configuration found.
Jan 14 13:18:06.641377 kernel: mousedev: PS/2 mouse device common for all mice
Jan 14 13:18:06.655522 kernel: hv_vmbus: registering driver hv_balloon
Jan 14 13:18:06.655605 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 14 13:18:06.692397 kernel: hv_vmbus: registering driver hyperv_fb
Jan 14 13:18:06.697381 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 14 13:18:06.705390 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 14 13:18:06.713374 kernel: Console: switching to colour dummy device 80x25
Jan 14 13:18:06.715441 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 13:18:06.739734 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:18:06.904543 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1407)
Jan 14 13:18:06.903039 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 14 13:18:06.903358 systemd[1]: Reloading finished in 572 ms.
Jan 14 13:18:06.965974 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:18:06.975421 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 14 13:18:07.046464 systemd[1]: Starting ensure-sysext.service...
Jan 14 13:18:07.052773 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 13:18:07.065763 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:18:07.073818 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:18:07.116145 systemd[1]: Reloading requested from client PID 1490 ('systemctl') (unit ensure-sysext.service)...
Jan 14 13:18:07.116171 systemd[1]: Reloading...
Jan 14 13:18:07.177016 systemd-tmpfiles[1502]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 14 13:18:07.178059 systemd-tmpfiles[1502]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 14 13:18:07.179488 systemd-tmpfiles[1502]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 14 13:18:07.180026 systemd-tmpfiles[1502]: ACLs are not supported, ignoring.
Jan 14 13:18:07.182883 systemd-tmpfiles[1502]: ACLs are not supported, ignoring.
Jan 14 13:18:07.236840 systemd-tmpfiles[1502]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 13:18:07.236857 systemd-tmpfiles[1502]: Skipping /boot
Jan 14 13:18:07.244377 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jan 14 13:18:07.290157 systemd-tmpfiles[1502]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 13:18:07.290172 systemd-tmpfiles[1502]: Skipping /boot
Jan 14 13:18:07.299478 zram_generator::config[1549]: No configuration found.
Jan 14 13:18:07.440896 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:18:07.532928 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 13:18:07.536812 systemd[1]: Reloading finished in 419 ms.
Jan 14 13:18:07.556952 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:18:07.561711 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:18:07.584009 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:18:07.592798 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 13:18:07.616748 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 14 13:18:07.620536 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:18:07.624702 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:18:07.637004 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:18:07.648703 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:18:07.651931 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:18:07.655119 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 14 13:18:07.668981 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 14 13:18:07.675903 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:18:07.682453 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 14 13:18:07.688789 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 14 13:18:07.691863 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:18:07.692173 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:18:07.695634 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:18:07.702699 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:18:07.708665 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:18:07.713568 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:18:07.713766 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:18:07.718042 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:18:07.718235 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:18:07.727019 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:18:07.727266 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:18:07.743180 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:18:07.743823 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:18:07.749701 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:18:07.760644 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:18:07.775105 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:18:07.780851 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:18:07.781032 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:18:07.782056 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 14 13:18:07.788032 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 14 13:18:07.799742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:18:07.799932 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:18:07.804411 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:18:07.808733 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:18:07.808926 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:18:07.822916 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 14 13:18:07.839676 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:18:07.839922 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:18:07.844450 augenrules[1659]: No rules
Jan 14 13:18:07.855710 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 13:18:07.855951 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 13:18:07.860825 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 14 13:18:07.880127 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 14 13:18:07.883659 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 13:18:07.883953 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 13:18:07.886497 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 14 13:18:07.899501 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:18:07.906679 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 13:18:07.910263 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:18:07.915528 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:18:07.929953 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 13:18:07.943683 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:18:07.960466 lvm[1667]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 14 13:18:07.959484 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:18:07.968769 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:18:07.970408 systemd[1]: Reached target time-set.target - System Time Set.
Jan 14 13:18:07.977552 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:18:07.981174 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:18:07.981566 augenrules[1669]: /sbin/augenrules: No change
Jan 14 13:18:07.982095 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:18:07.987627 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 13:18:07.987944 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 13:18:07.990650 systemd-resolved[1627]: Positive Trust Anchors:
Jan 14 13:18:07.990897 systemd-resolved[1627]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 13:18:07.990978 systemd-resolved[1627]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 13:18:07.993533 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:18:07.993718 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:18:07.998038 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:18:07.998320 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:18:08.004124 systemd[1]: Finished ensure-sysext.service.
Jan 14 13:18:08.006632 augenrules[1693]: No rules
Jan 14 13:18:08.007176 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 13:18:08.007403 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 13:18:08.014518 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 13:18:08.014628 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 13:18:08.049309 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 14 13:18:08.053988 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:18:08.061585 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 14 13:18:08.071222 lvm[1703]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 14 13:18:08.072560 systemd-resolved[1627]: Using system hostname 'ci-4152.2.0-a-42c09c22a8'.
Jan 14 13:18:08.074799 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 13:18:08.078607 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:18:08.089094 systemd-networkd[1501]: lo: Link UP
Jan 14 13:18:08.089104 systemd-networkd[1501]: lo: Gained carrier
Jan 14 13:18:08.092204 systemd-networkd[1501]: Enumeration completed
Jan 14 13:18:08.092695 systemd-networkd[1501]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:18:08.092708 systemd-networkd[1501]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:18:08.093117 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 13:18:08.097348 systemd[1]: Reached target network.target - Network.
Jan 14 13:18:08.106577 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 14 13:18:08.110471 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 14 13:18:08.147437 kernel: mlx5_core 5438:00:02.0 enP21560s1: Link up
Jan 14 13:18:08.166414 kernel: hv_netvsc 7c1e5266-731b-7c1e-5266-731b7c1e5266 eth0: Data path switched to VF: enP21560s1
Jan 14 13:18:08.167694 systemd-networkd[1501]: enP21560s1: Link UP
Jan 14 13:18:08.168217 systemd-networkd[1501]: eth0: Link UP
Jan 14 13:18:08.168223 systemd-networkd[1501]: eth0: Gained carrier
Jan 14 13:18:08.168251 systemd-networkd[1501]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:18:08.172742 systemd-networkd[1501]: enP21560s1: Gained carrier
Jan 14 13:18:08.210443 systemd-networkd[1501]: eth0: DHCPv4 address 10.200.4.13/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 14 13:18:08.646015 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 14 13:18:08.650541 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 14 13:18:09.903542 systemd-networkd[1501]: enP21560s1: Gained IPv6LL
Jan 14 13:18:10.223640 systemd-networkd[1501]: eth0: Gained IPv6LL
Jan 14 13:18:10.226862 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 14 13:18:10.233409 systemd[1]: Reached target network-online.target - Network is Online.
Jan 14 13:18:10.661098 ldconfig[1292]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 14 13:18:10.672145 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 14 13:18:10.680679 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 14 13:18:10.706985 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 14 13:18:10.710741 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 13:18:10.713752 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 14 13:18:10.719993 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 14 13:18:10.723238 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 14 13:18:10.726115 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 14 13:18:10.729655 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 14 13:18:10.733200 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 14 13:18:10.733248 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:18:10.735750 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:18:10.738955 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 14 13:18:10.743397 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 14 13:18:10.750364 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 14 13:18:10.753951 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 14 13:18:10.756870 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:18:10.759379 systemd[1]: Reached target basic.target - Basic System.
Jan 14 13:18:10.761891 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 14 13:18:10.761933 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 14 13:18:10.782490 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 14 13:18:10.789504 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 14 13:18:10.798553 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 14 13:18:10.804546 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 14 13:18:10.814482 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 14 13:18:10.819642 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 14 13:18:10.823632 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 14 13:18:10.823681 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 14 13:18:10.830523 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 14 13:18:10.834172 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 14 13:18:10.839442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:18:10.842418 KVP[1720]: KVP starting; pid is:1720
Jan 14 13:18:10.851531 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 14 13:18:10.860820 KVP[1720]: KVP LIC Version: 3.1
Jan 14 13:18:10.861467 kernel: hv_utils: KVP IC version 4.0
Jan 14 13:18:10.862740 (chronyd)[1713]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 14 13:18:10.863653 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 14 13:18:10.867781 jq[1717]: false
Jan 14 13:18:10.870985 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 14 13:18:10.882535 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 14 13:18:10.887576 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 14 13:18:10.896791 chronyd[1732]: chronyd version 4.6 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 14 13:18:10.904496 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 14 13:18:10.908194 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 14 13:18:10.909443 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 14 13:18:10.914517 systemd[1]: Starting update-engine.service - Update Engine...
Jan 14 13:18:10.918479 extend-filesystems[1718]: Found loop4
Jan 14 13:18:10.918479 extend-filesystems[1718]: Found loop5
Jan 14 13:18:10.918479 extend-filesystems[1718]: Found loop6
Jan 14 13:18:10.918479 extend-filesystems[1718]: Found loop7
Jan 14 13:18:10.918479 extend-filesystems[1718]: Found sda
Jan 14 13:18:10.918479 extend-filesystems[1718]: Found sda1
Jan 14 13:18:10.918479 extend-filesystems[1718]: Found sda2
Jan 14 13:18:10.918479 extend-filesystems[1718]: Found sda3
Jan 14 13:18:10.918479 extend-filesystems[1718]: Found usr
Jan 14 13:18:10.918479 extend-filesystems[1718]: Found sda4
Jan 14 13:18:10.918479 extend-filesystems[1718]: Found sda6
Jan 14 13:18:10.918479 extend-filesystems[1718]: Found sda7
Jan 14 13:18:10.918479 extend-filesystems[1718]: Found sda9
Jan 14 13:18:10.918479 extend-filesystems[1718]: Checking size of /dev/sda9
Jan 14 13:18:10.922576 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 14 13:18:10.927620 chronyd[1732]: Timezone right/UTC failed leap second check, ignoring
Jan 14 13:18:10.927830 chronyd[1732]: Loaded seccomp filter (level 2)
Jan 14 13:18:10.948420 jq[1738]: true
Jan 14 13:18:10.976876 systemd[1]: Started chronyd.service - NTP client/server.
Jan 14 13:18:10.986828 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 14 13:18:10.987577 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 14 13:18:10.991319 dbus-daemon[1716]: [system] SELinux support is enabled
Jan 14 13:18:10.994027 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 14 13:18:11.003752 extend-filesystems[1718]: Old size kept for /dev/sda9
Jan 14 13:18:11.018034 extend-filesystems[1718]: Found sr0
Jan 14 13:18:11.007269 systemd[1]: motdgen.service: Deactivated successfully.
Jan 14 13:18:11.007522 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 14 13:18:11.012035 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 14 13:18:11.020648 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 14 13:18:11.020863 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 14 13:18:11.028085 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 14 13:18:11.028327 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 14 13:18:11.067040 systemd-logind[1734]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 14 13:18:11.070441 systemd-logind[1734]: New seat seat0.
Jan 14 13:18:11.077598 (ntainerd)[1761]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 14 13:18:11.082854 update_engine[1736]: I20250114 13:18:11.082731  1736 main.cc:92] Flatcar Update Engine starting
Jan 14 13:18:11.087777 update_engine[1736]: I20250114 13:18:11.087728  1736 update_check_scheduler.cc:74] Next update check in 9m17s
Jan 14 13:18:11.088963 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 14 13:18:11.089021 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 14 13:18:11.095953 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 14 13:18:11.095994 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 13:18:11.102576 systemd[1]: Started update-engine.service - Update Engine. Jan 14 13:18:11.107836 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 13:18:11.116554 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 13:18:11.125741 jq[1759]: true Jan 14 13:18:11.149572 tar[1756]: linux-amd64/helm Jan 14 13:18:11.178847 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1768) Jan 14 13:18:11.234901 coreos-metadata[1715]: Jan 14 13:18:11.234 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 13:18:11.241378 coreos-metadata[1715]: Jan 14 13:18:11.241 INFO Fetch successful Jan 14 13:18:11.241634 coreos-metadata[1715]: Jan 14 13:18:11.241 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 14 13:18:11.246462 coreos-metadata[1715]: Jan 14 13:18:11.246 INFO Fetch successful Jan 14 13:18:11.249735 coreos-metadata[1715]: Jan 14 13:18:11.249 INFO Fetching http://168.63.129.16/machine/541f5483-1e1c-4797-bfa1-8f6607771fe7/c97ebe5a%2De91f%2D4d2a%2D8d39%2Deb0bbbe93179.%5Fci%2D4152.2.0%2Da%2D42c09c22a8?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 14 13:18:11.252408 coreos-metadata[1715]: Jan 14 13:18:11.252 INFO Fetch successful Jan 14 13:18:11.252681 coreos-metadata[1715]: Jan 14 13:18:11.252 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 14 13:18:11.273131 coreos-metadata[1715]: Jan 14 13:18:11.273 INFO Fetch successful Jan 14 13:18:11.386933 bash[1838]: Updated "/home/core/.ssh/authorized_keys" Jan 14 13:18:11.389251 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 13:18:11.393991 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 14 13:18:11.407036 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 13:18:11.412249 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 13:18:11.413750 locksmithd[1781]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 13:18:11.901727 tar[1756]: linux-amd64/LICENSE Jan 14 13:18:11.901960 tar[1756]: linux-amd64/README.md Jan 14 13:18:11.916843 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 13:18:12.032401 sshd_keygen[1755]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 13:18:12.062487 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 13:18:12.074778 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 13:18:12.088599 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 14 13:18:12.100146 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 13:18:12.102240 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 13:18:12.113492 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 13:18:12.132671 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 13:18:12.141708 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 13:18:12.149540 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 13:18:12.153895 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 13:18:12.166603 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 14 13:18:12.451632 containerd[1761]: time="2025-01-14T13:18:12.451477800Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 14 13:18:12.483505 containerd[1761]: time="2025-01-14T13:18:12.483389000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 14 13:18:12.485190 containerd[1761]: time="2025-01-14T13:18:12.485141500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:18:12.485190 containerd[1761]: time="2025-01-14T13:18:12.485178200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 14 13:18:12.485337 containerd[1761]: time="2025-01-14T13:18:12.485200800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 14 13:18:12.485416 containerd[1761]: time="2025-01-14T13:18:12.485390200Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 14 13:18:12.485471 containerd[1761]: time="2025-01-14T13:18:12.485435700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 14 13:18:12.486583 containerd[1761]: time="2025-01-14T13:18:12.485520700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:18:12.486583 containerd[1761]: time="2025-01-14T13:18:12.485551400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 14 13:18:12.486583 containerd[1761]: time="2025-01-14T13:18:12.485766900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:18:12.486583 containerd[1761]: time="2025-01-14T13:18:12.485785600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 14 13:18:12.486583 containerd[1761]: time="2025-01-14T13:18:12.485805200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:18:12.486583 containerd[1761]: time="2025-01-14T13:18:12.485818800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 14 13:18:12.486583 containerd[1761]: time="2025-01-14T13:18:12.485899800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 14 13:18:12.486583 containerd[1761]: time="2025-01-14T13:18:12.486133300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 14 13:18:12.486583 containerd[1761]: time="2025-01-14T13:18:12.486266000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:18:12.486583 containerd[1761]: time="2025-01-14T13:18:12.486283400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 14 13:18:12.486583 containerd[1761]: time="2025-01-14T13:18:12.486390500Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 14 13:18:12.486928 containerd[1761]: time="2025-01-14T13:18:12.486451800Z" level=info msg="metadata content store policy set" policy=shared Jan 14 13:18:12.503089 containerd[1761]: time="2025-01-14T13:18:12.502133100Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 14 13:18:12.503089 containerd[1761]: time="2025-01-14T13:18:12.502208300Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 14 13:18:12.503089 containerd[1761]: time="2025-01-14T13:18:12.502232300Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 14 13:18:12.503089 containerd[1761]: time="2025-01-14T13:18:12.502254800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 14 13:18:12.503089 containerd[1761]: time="2025-01-14T13:18:12.502275000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 14 13:18:12.503089 containerd[1761]: time="2025-01-14T13:18:12.502493200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 14 13:18:12.503089 containerd[1761]: time="2025-01-14T13:18:12.502851000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 14 13:18:12.503089 containerd[1761]: time="2025-01-14T13:18:12.503004000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 14 13:18:12.503089 containerd[1761]: time="2025-01-14T13:18:12.503025900Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 14 13:18:12.503089 containerd[1761]: time="2025-01-14T13:18:12.503045700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 14 13:18:12.503089 containerd[1761]: time="2025-01-14T13:18:12.503065100Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 14 13:18:12.503089 containerd[1761]: time="2025-01-14T13:18:12.503081800Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 14 13:18:12.503089 containerd[1761]: time="2025-01-14T13:18:12.503099800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 14 13:18:12.503635 containerd[1761]: time="2025-01-14T13:18:12.503133900Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 14 13:18:12.503635 containerd[1761]: time="2025-01-14T13:18:12.503154800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 14 13:18:12.503635 containerd[1761]: time="2025-01-14T13:18:12.503169300Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 14 13:18:12.503635 containerd[1761]: time="2025-01-14T13:18:12.503184500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 14 13:18:12.503635 containerd[1761]: time="2025-01-14T13:18:12.503199800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 14 13:18:12.503635 containerd[1761]: time="2025-01-14T13:18:12.503224900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 14 13:18:12.503635 containerd[1761]: time="2025-01-14T13:18:12.503244800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jan 14 13:18:12.503635 containerd[1761]: time="2025-01-14T13:18:12.503262700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 14 13:18:12.503635 containerd[1761]: time="2025-01-14T13:18:12.503281500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 14 13:18:12.503635 containerd[1761]: time="2025-01-14T13:18:12.503299400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 14 13:18:12.503635 containerd[1761]: time="2025-01-14T13:18:12.503318700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 14 13:18:12.503635 containerd[1761]: time="2025-01-14T13:18:12.503335800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 14 13:18:12.503635 containerd[1761]: time="2025-01-14T13:18:12.503383900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 14 13:18:12.503635 containerd[1761]: time="2025-01-14T13:18:12.503405200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 14 13:18:12.504105 containerd[1761]: time="2025-01-14T13:18:12.503426900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 14 13:18:12.504105 containerd[1761]: time="2025-01-14T13:18:12.503444300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 14 13:18:12.504105 containerd[1761]: time="2025-01-14T13:18:12.503464000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 14 13:18:12.504105 containerd[1761]: time="2025-01-14T13:18:12.503481200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 14 13:18:12.504105 containerd[1761]: time="2025-01-14T13:18:12.503500800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 14 13:18:12.504105 containerd[1761]: time="2025-01-14T13:18:12.503542900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 14 13:18:12.504105 containerd[1761]: time="2025-01-14T13:18:12.503564000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 14 13:18:12.504105 containerd[1761]: time="2025-01-14T13:18:12.503579200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 14 13:18:12.504105 containerd[1761]: time="2025-01-14T13:18:12.503631200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 14 13:18:12.504105 containerd[1761]: time="2025-01-14T13:18:12.503655300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 14 13:18:12.504105 containerd[1761]: time="2025-01-14T13:18:12.503671200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 14 13:18:12.504105 containerd[1761]: time="2025-01-14T13:18:12.503688000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 14 13:18:12.504105 containerd[1761]: time="2025-01-14T13:18:12.503703900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 14 13:18:12.504537 containerd[1761]: time="2025-01-14T13:18:12.503720700Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 14 13:18:12.504537 containerd[1761]: time="2025-01-14T13:18:12.503735500Z" level=info msg="NRI interface is disabled by configuration." Jan 14 13:18:12.504537 containerd[1761]: time="2025-01-14T13:18:12.503751200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 14 13:18:12.504649 containerd[1761]: time="2025-01-14T13:18:12.504121500Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 14 13:18:12.504649 containerd[1761]: time="2025-01-14T13:18:12.504187300Z" level=info msg="Connect containerd service" Jan 14 13:18:12.504649 containerd[1761]: time="2025-01-14T13:18:12.504239000Z" level=info msg="using legacy CRI server" Jan 14 13:18:12.504649 containerd[1761]: time="2025-01-14T13:18:12.504249900Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 13:18:12.504649 containerd[1761]: time="2025-01-14T13:18:12.504542800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 14 13:18:12.505774 containerd[1761]: time="2025-01-14T13:18:12.505264600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 13:18:12.505774 containerd[1761]: time="2025-01-14T13:18:12.505617700Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jan 14 13:18:12.505774 containerd[1761]: time="2025-01-14T13:18:12.505668600Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 13:18:12.505774 containerd[1761]: time="2025-01-14T13:18:12.505734200Z" level=info msg="Start subscribing containerd event" Jan 14 13:18:12.505963 containerd[1761]: time="2025-01-14T13:18:12.505858600Z" level=info msg="Start recovering state" Jan 14 13:18:12.505963 containerd[1761]: time="2025-01-14T13:18:12.505929800Z" level=info msg="Start event monitor" Jan 14 13:18:12.505963 containerd[1761]: time="2025-01-14T13:18:12.505948800Z" level=info msg="Start snapshots syncer" Jan 14 13:18:12.505963 containerd[1761]: time="2025-01-14T13:18:12.505962300Z" level=info msg="Start cni network conf syncer for default" Jan 14 13:18:12.506102 containerd[1761]: time="2025-01-14T13:18:12.505973300Z" level=info msg="Start streaming server" Jan 14 13:18:12.506149 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 13:18:12.513131 containerd[1761]: time="2025-01-14T13:18:12.512370700Z" level=info msg="containerd successfully booted in 0.062316s" Jan 14 13:18:12.581286 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:18:12.585601 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 13:18:12.589991 systemd[1]: Startup finished in 848ms (firmware) + 32.349s (loader) + 1.053s (kernel) + 13.517s (initrd) + 12.218s (userspace) = 59.986s. Jan 14 13:18:12.595204 (kubelet)[1898]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:18:12.965026 login[1887]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 14 13:18:12.965861 login[1885]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 14 13:18:12.980082 systemd-logind[1734]: New session 2 of user core. 
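The "Startup finished" line above breaks boot time into firmware, loader, kernel, initrd, and userspace phases. A sketch of summing those phases from the log text (parsing approach is an assumption; systemd rounds each phase, so the sum can differ from the reported total by about a millisecond):

```python
import re

line = ("Startup finished in 848ms (firmware) + 32.349s (loader) "
        "+ 1.053s (kernel) + 13.517s (initrd) + 12.218s (userspace) = 59.986s.")

def phase_seconds(entry: str) -> float:
    """Convert '848ms' or '32.349s' to seconds."""
    value, unit = re.fullmatch(r"([\d.]+)(ms|s)", entry).groups()
    return float(value) / 1000 if unit == "ms" else float(value)

# Extract each '<duration> (<phase>)' pair and sum the durations.
phases = re.findall(r"([\d.]+(?:ms|s)) \((\w+)\)", line)
total = sum(phase_seconds(duration) for duration, _ in phases)
print(f"{total:.3f}s")  # 59.985s (reported total is 59.986s due to per-phase rounding)
```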
Jan 14 13:18:12.983400 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 13:18:12.990715 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 13:18:13.008389 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 13:18:13.016735 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 13:18:13.028830 (systemd)[1909]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 14 13:18:13.317158 systemd[1909]: Queued start job for default target default.target. Jan 14 13:18:13.323150 systemd[1909]: Created slice app.slice - User Application Slice. Jan 14 13:18:13.323188 systemd[1909]: Reached target paths.target - Paths. Jan 14 13:18:13.323206 systemd[1909]: Reached target timers.target - Timers. Jan 14 13:18:13.327485 systemd[1909]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 13:18:13.341807 systemd[1909]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 13:18:13.341964 systemd[1909]: Reached target sockets.target - Sockets. Jan 14 13:18:13.341986 systemd[1909]: Reached target basic.target - Basic System. Jan 14 13:18:13.342045 systemd[1909]: Reached target default.target - Main User Target. Jan 14 13:18:13.342083 systemd[1909]: Startup finished in 303ms. Jan 14 13:18:13.342274 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 13:18:13.345553 kubelet[1898]: E0114 13:18:13.345512 1898 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:18:13.347530 systemd[1]: Started session-2.scope - Session 2 of User core. 
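The kubelet error above ("E0114 13:18:13.345512 1898 run.go:74] ...") uses the klog header format: severity letter, MMDD date, time, PID, and source file:line. A sketch of pulling those fields out — the regex is an assumption based on the line as logged, not klog's own parser:

```python
import re

line = ('E0114 13:18:13.345512 1898 run.go:74] "command failed" '
        'err="failed to load kubelet config file"')

# klog header: <severity>MMDD HH:MM:SS.ffffff <pid> <file>:<line>]
header = re.match(
    r"([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\]",
    line,
)
severity, month, day, time_of_day, pid, location = header.groups()
print(severity, pid, location)  # E 1898 run.go:74
```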
Jan 14 13:18:13.347859 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:18:13.348037 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:18:13.966809 login[1887]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 14 13:18:13.972112 systemd-logind[1734]: New session 1 of user core. Jan 14 13:18:13.982564 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 13:18:14.337009 waagent[1888]: 2025-01-14T13:18:14.336836Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 14 13:18:14.376331 waagent[1888]: 2025-01-14T13:18:14.337309Z INFO Daemon Daemon OS: flatcar 4152.2.0 Jan 14 13:18:14.376331 waagent[1888]: 2025-01-14T13:18:14.338315Z INFO Daemon Daemon Python: 3.11.10 Jan 14 13:18:14.376331 waagent[1888]: 2025-01-14T13:18:14.339472Z INFO Daemon Daemon Run daemon Jan 14 13:18:14.376331 waagent[1888]: 2025-01-14T13:18:14.340382Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4152.2.0' Jan 14 13:18:14.376331 waagent[1888]: 2025-01-14T13:18:14.341238Z INFO Daemon Daemon Using waagent for provisioning Jan 14 13:18:14.376331 waagent[1888]: 2025-01-14T13:18:14.342360Z INFO Daemon Daemon Activate resource disk Jan 14 13:18:14.376331 waagent[1888]: 2025-01-14T13:18:14.343124Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 14 13:18:14.376331 waagent[1888]: 2025-01-14T13:18:14.348950Z INFO Daemon Daemon Found device: None Jan 14 13:18:14.376331 waagent[1888]: 2025-01-14T13:18:14.349878Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 14 13:18:14.376331 waagent[1888]: 2025-01-14T13:18:14.350843Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 14 13:18:14.376331 waagent[1888]: 2025-01-14T13:18:14.352197Z INFO 
Daemon Daemon Clean protocol and wireserver endpoint Jan 14 13:18:14.376331 waagent[1888]: 2025-01-14T13:18:14.352793Z INFO Daemon Daemon Running default provisioning handler Jan 14 13:18:14.379500 waagent[1888]: 2025-01-14T13:18:14.379411Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 14 13:18:14.387709 waagent[1888]: 2025-01-14T13:18:14.387639Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 14 13:18:14.392544 waagent[1888]: 2025-01-14T13:18:14.392428Z INFO Daemon Daemon cloud-init is enabled: False Jan 14 13:18:14.397078 waagent[1888]: 2025-01-14T13:18:14.392618Z INFO Daemon Daemon Copying ovf-env.xml Jan 14 13:18:14.496722 waagent[1888]: 2025-01-14T13:18:14.496611Z INFO Daemon Daemon Successfully mounted dvd Jan 14 13:18:14.510617 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 14 13:18:14.513171 waagent[1888]: 2025-01-14T13:18:14.513092Z INFO Daemon Daemon Detect protocol endpoint Jan 14 13:18:14.528830 waagent[1888]: 2025-01-14T13:18:14.513517Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 13:18:14.528830 waagent[1888]: 2025-01-14T13:18:14.514618Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 14 13:18:14.528830 waagent[1888]: 2025-01-14T13:18:14.515496Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 14 13:18:14.528830 waagent[1888]: 2025-01-14T13:18:14.516079Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 14 13:18:14.528830 waagent[1888]: 2025-01-14T13:18:14.516875Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 14 13:18:14.577229 waagent[1888]: 2025-01-14T13:18:14.577148Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 14 13:18:14.585344 waagent[1888]: 2025-01-14T13:18:14.577751Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 14 13:18:14.585344 waagent[1888]: 2025-01-14T13:18:14.578561Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 14 13:18:14.725456 waagent[1888]: 2025-01-14T13:18:14.725272Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 14 13:18:14.729115 waagent[1888]: 2025-01-14T13:18:14.729030Z INFO Daemon Daemon Forcing an update of the goal state. Jan 14 13:18:14.735584 waagent[1888]: 2025-01-14T13:18:14.735525Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 13:18:14.751726 waagent[1888]: 2025-01-14T13:18:14.751660Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.162 Jan 14 13:18:14.771660 waagent[1888]: 2025-01-14T13:18:14.752418Z INFO Daemon Jan 14 13:18:14.771660 waagent[1888]: 2025-01-14T13:18:14.753662Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 04636e32-9fbb-446d-97dd-5ae3001faada eTag: 18324776585494073947 source: Fabric] Jan 14 13:18:14.771660 waagent[1888]: 2025-01-14T13:18:14.754440Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jan 14 13:18:14.771660 waagent[1888]: 2025-01-14T13:18:14.755123Z INFO Daemon Jan 14 13:18:14.771660 waagent[1888]: 2025-01-14T13:18:14.756029Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 14 13:18:14.771660 waagent[1888]: 2025-01-14T13:18:14.760749Z INFO Daemon Daemon Downloading artifacts profile blob Jan 14 13:18:14.833607 waagent[1888]: 2025-01-14T13:18:14.833513Z INFO Daemon Downloaded certificate {'thumbprint': 'B6FE83219AA37BA2B82AD022957C0D81BC926029', 'hasPrivateKey': True} Jan 14 13:18:14.840421 waagent[1888]: 2025-01-14T13:18:14.834338Z INFO Daemon Fetch goal state completed Jan 14 13:18:14.846974 waagent[1888]: 2025-01-14T13:18:14.846920Z INFO Daemon Daemon Starting provisioning Jan 14 13:18:14.854395 waagent[1888]: 2025-01-14T13:18:14.847190Z INFO Daemon Daemon Handle ovf-env.xml. Jan 14 13:18:14.854395 waagent[1888]: 2025-01-14T13:18:14.848328Z INFO Daemon Daemon Set hostname [ci-4152.2.0-a-42c09c22a8] Jan 14 13:18:14.866373 waagent[1888]: 2025-01-14T13:18:14.866276Z INFO Daemon Daemon Publish hostname [ci-4152.2.0-a-42c09c22a8] Jan 14 13:18:14.874649 waagent[1888]: 2025-01-14T13:18:14.866779Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 14 13:18:14.874649 waagent[1888]: 2025-01-14T13:18:14.867680Z INFO Daemon Daemon Primary interface is [eth0] Jan 14 13:18:14.925134 systemd-networkd[1501]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:18:14.925146 systemd-networkd[1501]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 14 13:18:14.925206 systemd-networkd[1501]: eth0: DHCP lease lost Jan 14 13:18:14.927216 waagent[1888]: 2025-01-14T13:18:14.927119Z INFO Daemon Daemon Create user account if not exists Jan 14 13:18:14.930739 waagent[1888]: 2025-01-14T13:18:14.930660Z INFO Daemon Daemon User core already exists, skip useradd Jan 14 13:18:14.945243 waagent[1888]: 2025-01-14T13:18:14.930895Z INFO Daemon Daemon Configure sudoer Jan 14 13:18:14.945243 waagent[1888]: 2025-01-14T13:18:14.932135Z INFO Daemon Daemon Configure sshd Jan 14 13:18:14.945243 waagent[1888]: 2025-01-14T13:18:14.933514Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 14 13:18:14.945243 waagent[1888]: 2025-01-14T13:18:14.934218Z INFO Daemon Daemon Deploy ssh public key. Jan 14 13:18:14.932804 systemd-networkd[1501]: eth0: DHCPv6 lease lost Jan 14 13:18:14.978456 systemd-networkd[1501]: eth0: DHCPv4 address 10.200.4.13/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 13:18:16.029261 waagent[1888]: 2025-01-14T13:18:16.029153Z INFO Daemon Daemon Provisioning complete Jan 14 13:18:16.041507 waagent[1888]: 2025-01-14T13:18:16.041444Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 14 13:18:16.049079 waagent[1888]: 2025-01-14T13:18:16.041766Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 14 13:18:16.049079 waagent[1888]: 2025-01-14T13:18:16.043100Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 14 13:18:16.172833 waagent[1962]: 2025-01-14T13:18:16.172725Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 14 13:18:16.173265 waagent[1962]: 2025-01-14T13:18:16.172907Z INFO ExtHandler ExtHandler OS: flatcar 4152.2.0 Jan 14 13:18:16.173265 waagent[1962]: 2025-01-14T13:18:16.172990Z INFO ExtHandler ExtHandler Python: 3.11.10 Jan 14 13:18:16.645037 waagent[1962]: 2025-01-14T13:18:16.644917Z INFO ExtHandler ExtHandler Distro: flatcar-4152.2.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 14 13:18:16.645331 waagent[1962]: 2025-01-14T13:18:16.645263Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:18:16.645477 waagent[1962]: 2025-01-14T13:18:16.645421Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:18:16.653512 waagent[1962]: 2025-01-14T13:18:16.653438Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 13:18:16.659936 waagent[1962]: 2025-01-14T13:18:16.659871Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.162 Jan 14 13:18:16.660434 waagent[1962]: 2025-01-14T13:18:16.660377Z INFO ExtHandler Jan 14 13:18:16.660520 waagent[1962]: 2025-01-14T13:18:16.660474Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 3d3b89bd-1c90-4503-988f-a875f92bd5d2 eTag: 18324776585494073947 source: Fabric] Jan 14 13:18:16.660845 waagent[1962]: 2025-01-14T13:18:16.660797Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 14 13:18:16.680083 waagent[1962]: 2025-01-14T13:18:16.679931Z INFO ExtHandler Jan 14 13:18:16.680247 waagent[1962]: 2025-01-14T13:18:16.680185Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 14 13:18:16.684933 waagent[1962]: 2025-01-14T13:18:16.684872Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 14 13:18:16.843291 waagent[1962]: 2025-01-14T13:18:16.843157Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B6FE83219AA37BA2B82AD022957C0D81BC926029', 'hasPrivateKey': True} Jan 14 13:18:16.844045 waagent[1962]: 2025-01-14T13:18:16.843962Z INFO ExtHandler Fetch goal state completed Jan 14 13:18:16.857905 waagent[1962]: 2025-01-14T13:18:16.857833Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1962 Jan 14 13:18:16.858066 waagent[1962]: 2025-01-14T13:18:16.858013Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 14 13:18:16.859699 waagent[1962]: 2025-01-14T13:18:16.859629Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4152.2.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 14 13:18:16.860075 waagent[1962]: 2025-01-14T13:18:16.860023Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 14 13:18:17.062430 waagent[1962]: 2025-01-14T13:18:17.062282Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 14 13:18:17.062634 waagent[1962]: 2025-01-14T13:18:17.062573Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 14 13:18:17.070385 waagent[1962]: 2025-01-14T13:18:17.070039Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 14 13:18:17.077026 systemd[1]: Reloading requested from client PID 1977 ('systemctl') (unit waagent.service)... Jan 14 13:18:17.077044 systemd[1]: Reloading... 
Jan 14 13:18:17.153386 zram_generator::config[2007]: No configuration found. Jan 14 13:18:17.286372 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:18:17.365656 systemd[1]: Reloading finished in 288 ms. Jan 14 13:18:17.389400 waagent[1962]: 2025-01-14T13:18:17.389061Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 14 13:18:17.398818 systemd[1]: Reloading requested from client PID 2068 ('systemctl') (unit waagent.service)... Jan 14 13:18:17.398935 systemd[1]: Reloading... Jan 14 13:18:17.491382 zram_generator::config[2102]: No configuration found. Jan 14 13:18:17.603591 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:18:17.682320 systemd[1]: Reloading finished in 283 ms. Jan 14 13:18:17.710101 waagent[1962]: 2025-01-14T13:18:17.708020Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 14 13:18:17.710101 waagent[1962]: 2025-01-14T13:18:17.708242Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 14 13:18:18.149576 waagent[1962]: 2025-01-14T13:18:18.149400Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 14 13:18:18.150207 waagent[1962]: 2025-01-14T13:18:18.150130Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 14 13:18:18.151125 waagent[1962]: 2025-01-14T13:18:18.151054Z INFO ExtHandler ExtHandler Starting env monitor service. 
Jan 14 13:18:18.151304 waagent[1962]: 2025-01-14T13:18:18.151251Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:18:18.151728 waagent[1962]: 2025-01-14T13:18:18.151664Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:18:18.151977 waagent[1962]: 2025-01-14T13:18:18.151921Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 14 13:18:18.152409 waagent[1962]: 2025-01-14T13:18:18.152255Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:18:18.152754 waagent[1962]: 2025-01-14T13:18:18.152674Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 14 13:18:18.153043 waagent[1962]: 2025-01-14T13:18:18.152987Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 14 13:18:18.153299 waagent[1962]: 2025-01-14T13:18:18.153244Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:18:18.153464 waagent[1962]: 2025-01-14T13:18:18.153412Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jan 14 13:18:18.153898 waagent[1962]: 2025-01-14T13:18:18.153830Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 14 13:18:18.154005 waagent[1962]: 2025-01-14T13:18:18.153943Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 14 13:18:18.154005 waagent[1962]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 14 13:18:18.154005 waagent[1962]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jan 14 13:18:18.154005 waagent[1962]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 14 13:18:18.154005 waagent[1962]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:18:18.154005 waagent[1962]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:18:18.154005 waagent[1962]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:18:18.154198 waagent[1962]: 2025-01-14T13:18:18.154003Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jan 14 13:18:18.154704 waagent[1962]: 2025-01-14T13:18:18.154648Z INFO EnvHandler ExtHandler Configure routes Jan 14 13:18:18.155321 waagent[1962]: 2025-01-14T13:18:18.155251Z INFO EnvHandler ExtHandler Gateway:None Jan 14 13:18:18.155476 waagent[1962]: 2025-01-14T13:18:18.155368Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 14 13:18:18.155855 waagent[1962]: 2025-01-14T13:18:18.155811Z INFO EnvHandler ExtHandler Routes:None Jan 14 13:18:18.161731 waagent[1962]: 2025-01-14T13:18:18.161635Z INFO ExtHandler ExtHandler Jan 14 13:18:18.161844 waagent[1962]: 2025-01-14T13:18:18.161797Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 6b6db247-57a5-42b6-b741-fdf88752981b correlation 26343d3f-e849-46a5-84ae-09082dca4050 created: 2025-01-14T13:17:00.532074Z] Jan 14 13:18:18.162708 waagent[1962]: 2025-01-14T13:18:18.162666Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 14 13:18:18.163976 waagent[1962]: 2025-01-14T13:18:18.163935Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jan 14 13:18:18.197651 waagent[1962]: 2025-01-14T13:18:18.197480Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 78650592-AF08-4967-AD26-8F98BC9083FC;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 14 13:18:18.197916 waagent[1962]: 2025-01-14T13:18:18.197858Z INFO MonitorHandler ExtHandler Network interfaces: Jan 14 13:18:18.197916 waagent[1962]: Executing ['ip', '-a', '-o', 'link']: Jan 14 13:18:18.197916 waagent[1962]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 14 13:18:18.197916 waagent[1962]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:66:73:1b brd ff:ff:ff:ff:ff:ff Jan 14 13:18:18.197916 waagent[1962]: 3: enP21560s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:66:73:1b brd ff:ff:ff:ff:ff:ff\ altname enP21560p0s2 Jan 14 13:18:18.197916 waagent[1962]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 14 13:18:18.197916 waagent[1962]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 14 13:18:18.197916 waagent[1962]: 2: eth0 inet 10.200.4.13/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 14 13:18:18.197916 waagent[1962]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 14 13:18:18.197916 waagent[1962]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 14 13:18:18.197916 waagent[1962]: 2: eth0 inet6 fe80::7e1e:52ff:fe66:731b/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 13:18:18.197916 waagent[1962]: 3: enP21560s1 inet6 fe80::7e1e:52ff:fe66:731b/64 scope link proto 
kernel_ll \ valid_lft forever preferred_lft forever Jan 14 13:18:18.266041 waagent[1962]: 2025-01-14T13:18:18.265973Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jan 14 13:18:18.266041 waagent[1962]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:18:18.266041 waagent[1962]: pkts bytes target prot opt in out source destination Jan 14 13:18:18.266041 waagent[1962]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:18:18.266041 waagent[1962]: pkts bytes target prot opt in out source destination Jan 14 13:18:18.266041 waagent[1962]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:18:18.266041 waagent[1962]: pkts bytes target prot opt in out source destination Jan 14 13:18:18.266041 waagent[1962]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 13:18:18.266041 waagent[1962]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 13:18:18.266041 waagent[1962]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 13:18:18.270034 waagent[1962]: 2025-01-14T13:18:18.269950Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 14 13:18:18.270034 waagent[1962]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:18:18.270034 waagent[1962]: pkts bytes target prot opt in out source destination Jan 14 13:18:18.270034 waagent[1962]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:18:18.270034 waagent[1962]: pkts bytes target prot opt in out source destination Jan 14 13:18:18.270034 waagent[1962]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:18:18.270034 waagent[1962]: pkts bytes target prot opt in out source destination Jan 14 13:18:18.270034 waagent[1962]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 13:18:18.270034 waagent[1962]: 4 594 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 13:18:18.270034 waagent[1962]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 
14 13:18:18.270592 waagent[1962]: 2025-01-14T13:18:18.270547Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 14 13:18:23.598885 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 13:18:23.607583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:18:23.705874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:18:23.716694 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:18:24.310850 kubelet[2198]: E0114 13:18:24.310785 2198 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:18:24.314571 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:18:24.314782 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:18:34.565328 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 13:18:34.571632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:18:34.669981 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 13:18:34.683726 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:18:34.723571 kubelet[2214]: E0114 13:18:34.723513 2214 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:18:34.725145 chronyd[1732]: Selected source PHC0 Jan 14 13:18:34.725548 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:18:34.725711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:18:37.989442 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 13:18:37.999702 systemd[1]: Started sshd@0-10.200.4.13:22-10.200.16.10:45092.service - OpenSSH per-connection server daemon (10.200.16.10:45092). Jan 14 13:18:38.923948 sshd[2223]: Accepted publickey for core from 10.200.16.10 port 45092 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:18:38.925626 sshd-session[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:18:38.930787 systemd-logind[1734]: New session 3 of user core. Jan 14 13:18:38.938528 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 13:18:39.456662 systemd[1]: Started sshd@1-10.200.4.13:22-10.200.16.10:45104.service - OpenSSH per-connection server daemon (10.200.16.10:45104). Jan 14 13:18:40.066639 sshd[2228]: Accepted publickey for core from 10.200.16.10 port 45104 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:18:40.068040 sshd-session[2228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:18:40.072580 systemd-logind[1734]: New session 4 of user core. 
Jan 14 13:18:40.078552 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 13:18:40.496296 sshd[2230]: Connection closed by 10.200.16.10 port 45104 Jan 14 13:18:40.497143 sshd-session[2228]: pam_unix(sshd:session): session closed for user core Jan 14 13:18:40.500386 systemd[1]: sshd@1-10.200.4.13:22-10.200.16.10:45104.service: Deactivated successfully. Jan 14 13:18:40.502629 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 13:18:40.504195 systemd-logind[1734]: Session 4 logged out. Waiting for processes to exit. Jan 14 13:18:40.505166 systemd-logind[1734]: Removed session 4. Jan 14 13:18:40.607368 systemd[1]: Started sshd@2-10.200.4.13:22-10.200.16.10:45116.service - OpenSSH per-connection server daemon (10.200.16.10:45116). Jan 14 13:18:41.218300 sshd[2235]: Accepted publickey for core from 10.200.16.10 port 45116 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:18:41.220014 sshd-session[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:18:41.224775 systemd-logind[1734]: New session 5 of user core. Jan 14 13:18:41.230545 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 13:18:41.644447 sshd[2237]: Connection closed by 10.200.16.10 port 45116 Jan 14 13:18:41.645302 sshd-session[2235]: pam_unix(sshd:session): session closed for user core Jan 14 13:18:41.648444 systemd[1]: sshd@2-10.200.4.13:22-10.200.16.10:45116.service: Deactivated successfully. Jan 14 13:18:41.650783 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 13:18:41.652489 systemd-logind[1734]: Session 5 logged out. Waiting for processes to exit. Jan 14 13:18:41.653428 systemd-logind[1734]: Removed session 5. Jan 14 13:18:41.752262 systemd[1]: Started sshd@3-10.200.4.13:22-10.200.16.10:45118.service - OpenSSH per-connection server daemon (10.200.16.10:45118). 
Jan 14 13:18:42.367082 sshd[2242]: Accepted publickey for core from 10.200.16.10 port 45118 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:18:42.368781 sshd-session[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:18:42.373475 systemd-logind[1734]: New session 6 of user core. Jan 14 13:18:42.380513 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 13:18:42.809263 sshd[2244]: Connection closed by 10.200.16.10 port 45118 Jan 14 13:18:42.810393 sshd-session[2242]: pam_unix(sshd:session): session closed for user core Jan 14 13:18:42.812962 systemd[1]: sshd@3-10.200.4.13:22-10.200.16.10:45118.service: Deactivated successfully. Jan 14 13:18:42.815058 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 13:18:42.816552 systemd-logind[1734]: Session 6 logged out. Waiting for processes to exit. Jan 14 13:18:42.817519 systemd-logind[1734]: Removed session 6. Jan 14 13:18:42.921711 systemd[1]: Started sshd@4-10.200.4.13:22-10.200.16.10:45120.service - OpenSSH per-connection server daemon (10.200.16.10:45120). Jan 14 13:18:43.530493 sshd[2249]: Accepted publickey for core from 10.200.16.10 port 45120 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:18:43.532095 sshd-session[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:18:43.537495 systemd-logind[1734]: New session 7 of user core. Jan 14 13:18:43.547541 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 14 13:18:44.057026 sudo[2252]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 13:18:44.057501 sudo[2252]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:18:44.087799 sudo[2252]: pam_unix(sudo:session): session closed for user root Jan 14 13:18:44.185192 sshd[2251]: Connection closed by 10.200.16.10 port 45120 Jan 14 13:18:44.186264 sshd-session[2249]: pam_unix(sshd:session): session closed for user core Jan 14 13:18:44.189193 systemd[1]: sshd@4-10.200.4.13:22-10.200.16.10:45120.service: Deactivated successfully. Jan 14 13:18:44.191176 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 13:18:44.192683 systemd-logind[1734]: Session 7 logged out. Waiting for processes to exit. Jan 14 13:18:44.193806 systemd-logind[1734]: Removed session 7. Jan 14 13:18:44.292446 systemd[1]: Started sshd@5-10.200.4.13:22-10.200.16.10:45134.service - OpenSSH per-connection server daemon (10.200.16.10:45134). Jan 14 13:18:44.795380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 13:18:44.801636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:18:44.902177 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:18:44.904564 sshd[2257]: Accepted publickey for core from 10.200.16.10 port 45134 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:18:44.906262 sshd-session[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:18:44.909223 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:18:44.912549 systemd-logind[1734]: New session 8 of user core. Jan 14 13:18:44.914063 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 14 13:18:45.238608 sudo[2276]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 13:18:45.238963 sudo[2276]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:18:45.242422 sudo[2276]: pam_unix(sudo:session): session closed for user root Jan 14 13:18:45.247721 sudo[2275]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 13:18:45.248077 sudo[2275]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:18:45.260755 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 13:18:45.490037 augenrules[2298]: No rules Jan 14 13:18:45.490999 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 13:18:45.491228 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 13:18:45.494008 sudo[2275]: pam_unix(sudo:session): session closed for user root Jan 14 13:18:45.505058 kubelet[2267]: E0114 13:18:45.504978 2267 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:18:45.507412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:18:45.507616 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:18:45.596262 sshd[2272]: Connection closed by 10.200.16.10 port 45134 Jan 14 13:18:45.597048 sshd-session[2257]: pam_unix(sshd:session): session closed for user core Jan 14 13:18:45.601392 systemd[1]: sshd@5-10.200.4.13:22-10.200.16.10:45134.service: Deactivated successfully. Jan 14 13:18:45.603248 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 13:18:45.603905 systemd-logind[1734]: Session 8 logged out. 
Waiting for processes to exit. Jan 14 13:18:45.604824 systemd-logind[1734]: Removed session 8. Jan 14 13:18:45.703402 systemd[1]: Started sshd@6-10.200.4.13:22-10.200.16.10:45146.service - OpenSSH per-connection server daemon (10.200.16.10:45146). Jan 14 13:18:46.313435 sshd[2307]: Accepted publickey for core from 10.200.16.10 port 45146 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:18:46.315007 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:18:46.320290 systemd-logind[1734]: New session 9 of user core. Jan 14 13:18:46.326511 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 13:18:46.647472 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 13:18:46.647826 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:18:48.303767 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 13:18:48.303840 (dockerd)[2327]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 13:18:50.305712 dockerd[2327]: time="2025-01-14T13:18:50.305638011Z" level=info msg="Starting up" Jan 14 13:18:50.696416 dockerd[2327]: time="2025-01-14T13:18:50.696346650Z" level=info msg="Loading containers: start." Jan 14 13:18:50.940387 kernel: Initializing XFRM netlink socket Jan 14 13:18:51.059689 systemd-networkd[1501]: docker0: Link UP Jan 14 13:18:51.123807 dockerd[2327]: time="2025-01-14T13:18:51.123756746Z" level=info msg="Loading containers: done." 
Jan 14 13:18:51.182132 dockerd[2327]: time="2025-01-14T13:18:51.182073074Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 13:18:51.182329 dockerd[2327]: time="2025-01-14T13:18:51.182200778Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 14 13:18:51.182420 dockerd[2327]: time="2025-01-14T13:18:51.182342082Z" level=info msg="Daemon has completed initialization" Jan 14 13:18:51.240447 dockerd[2327]: time="2025-01-14T13:18:51.240259998Z" level=info msg="API listen on /run/docker.sock" Jan 14 13:18:51.240851 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 13:18:53.318377 containerd[1761]: time="2025-01-14T13:18:53.318315633Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 14 13:18:54.066678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2945697738.mount: Deactivated successfully. Jan 14 13:18:54.745378 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 14 13:18:55.576073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 13:18:55.583612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:18:55.728668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 13:18:55.741843 (kubelet)[2576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:18:56.248821 kubelet[2576]: E0114 13:18:56.248764 2576 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:18:56.251376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:18:56.251588 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:18:56.263172 containerd[1761]: time="2025-01-14T13:18:56.263122236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:18:56.265682 containerd[1761]: time="2025-01-14T13:18:56.265613314Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675650" Jan 14 13:18:56.270392 containerd[1761]: time="2025-01-14T13:18:56.270301461Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:18:56.276266 containerd[1761]: time="2025-01-14T13:18:56.276128144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:18:56.277481 containerd[1761]: time="2025-01-14T13:18:56.277217878Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 2.958329927s" Jan 14 13:18:56.277481 containerd[1761]: time="2025-01-14T13:18:56.277263579Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Jan 14 13:18:56.301010 containerd[1761]: time="2025-01-14T13:18:56.300969722Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 14 13:18:56.808501 update_engine[1736]: I20250114 13:18:56.808413 1736 update_attempter.cc:509] Updating boot flags... Jan 14 13:18:56.863479 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2604) Jan 14 13:18:56.998492 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2603) Jan 14 13:18:58.321022 containerd[1761]: time="2025-01-14T13:18:58.320967338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:18:58.325219 containerd[1761]: time="2025-01-14T13:18:58.325142068Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606417" Jan 14 13:18:58.329525 containerd[1761]: time="2025-01-14T13:18:58.329460404Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:18:58.337936 containerd[1761]: time="2025-01-14T13:18:58.337859067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:18:58.339030 containerd[1761]: time="2025-01-14T13:18:58.338881799Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.037801774s" Jan 14 13:18:58.339030 containerd[1761]: time="2025-01-14T13:18:58.338924601Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Jan 14 13:18:58.361448 containerd[1761]: time="2025-01-14T13:18:58.361398605Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 14 13:18:59.787304 containerd[1761]: time="2025-01-14T13:18:59.787245963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:18:59.790600 containerd[1761]: time="2025-01-14T13:18:59.790529257Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783043" Jan 14 13:18:59.794643 containerd[1761]: time="2025-01-14T13:18:59.794575572Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:18:59.802657 containerd[1761]: time="2025-01-14T13:18:59.802039486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:18:59.803460 containerd[1761]: time="2025-01-14T13:18:59.803019214Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id 
\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.441577207s" Jan 14 13:18:59.803460 containerd[1761]: time="2025-01-14T13:18:59.803059415Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Jan 14 13:18:59.825372 containerd[1761]: time="2025-01-14T13:18:59.825320051Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 14 13:19:01.304529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1966538991.mount: Deactivated successfully. Jan 14 13:19:01.786004 containerd[1761]: time="2025-01-14T13:19:01.785949302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:19:01.789541 containerd[1761]: time="2025-01-14T13:19:01.789474503Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478" Jan 14 13:19:01.792194 containerd[1761]: time="2025-01-14T13:19:01.792135979Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:19:01.795541 containerd[1761]: time="2025-01-14T13:19:01.795487275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:19:01.796101 containerd[1761]: time="2025-01-14T13:19:01.796065291Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag 
\"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.970662838s" Jan 14 13:19:01.796192 containerd[1761]: time="2025-01-14T13:19:01.796108092Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 14 13:19:01.819410 containerd[1761]: time="2025-01-14T13:19:01.819362557Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 14 13:19:02.516006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4123158578.mount: Deactivated successfully. Jan 14 13:19:03.765778 containerd[1761]: time="2025-01-14T13:19:03.765721500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:19:03.768758 containerd[1761]: time="2025-01-14T13:19:03.768688485Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 14 13:19:03.774151 containerd[1761]: time="2025-01-14T13:19:03.774092239Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:19:03.780046 containerd[1761]: time="2025-01-14T13:19:03.779986808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:19:03.781055 containerd[1761]: time="2025-01-14T13:19:03.781014337Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.961612179s" Jan 14 13:19:03.781137 containerd[1761]: time="2025-01-14T13:19:03.781056738Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 14 13:19:03.804135 containerd[1761]: time="2025-01-14T13:19:03.804098297Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 14 13:19:04.435952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843212166.mount: Deactivated successfully. Jan 14 13:19:04.463238 containerd[1761]: time="2025-01-14T13:19:04.463171039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:19:04.465430 containerd[1761]: time="2025-01-14T13:19:04.465366501Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 14 13:19:04.469524 containerd[1761]: time="2025-01-14T13:19:04.469468919Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:19:04.474307 containerd[1761]: time="2025-01-14T13:19:04.474252755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:19:04.475127 containerd[1761]: time="2025-01-14T13:19:04.474968176Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 670.830278ms" Jan 14 
13:19:04.475127 containerd[1761]: time="2025-01-14T13:19:04.475004377Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 14 13:19:04.497556 containerd[1761]: time="2025-01-14T13:19:04.497514020Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 14 13:19:05.173519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2475430404.mount: Deactivated successfully. Jan 14 13:19:06.326119 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 14 13:19:06.332046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:19:06.426040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:19:06.434688 (kubelet)[2836]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:19:07.140485 kubelet[2836]: E0114 13:19:07.140415 2836 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:19:07.142501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:19:07.142675 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 14 13:19:08.265809 containerd[1761]: time="2025-01-14T13:19:08.265749066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:19:08.268867 containerd[1761]: time="2025-01-14T13:19:08.268807756Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 14 13:19:08.271877 containerd[1761]: time="2025-01-14T13:19:08.271819545Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:19:08.276801 containerd[1761]: time="2025-01-14T13:19:08.276744690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:19:08.277832 containerd[1761]: time="2025-01-14T13:19:08.277796321Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.7802467s" Jan 14 13:19:08.278812 containerd[1761]: time="2025-01-14T13:19:08.277948626Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 14 13:19:11.245464 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:19:11.258658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:19:11.289255 systemd[1]: Reloading requested from client PID 2911 ('systemctl') (unit session-9.scope)... Jan 14 13:19:11.289272 systemd[1]: Reloading... 
Jan 14 13:19:11.398381 zram_generator::config[2954]: No configuration found. Jan 14 13:19:11.526577 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:19:11.621788 systemd[1]: Reloading finished in 332 ms. Jan 14 13:19:11.695861 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 13:19:11.695995 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 13:19:11.696341 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:19:11.702823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:19:17.237528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:19:17.247707 (kubelet)[3018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 13:19:17.287108 kubelet[3018]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:19:17.287108 kubelet[3018]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 14 13:19:17.287108 kubelet[3018]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 14 13:19:17.288510 kubelet[3018]: I0114 13:19:17.288463 3018 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 13:19:18.144421 kubelet[3018]: I0114 13:19:18.144377 3018 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 14 13:19:18.144421 kubelet[3018]: I0114 13:19:18.144410 3018 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 13:19:18.144743 kubelet[3018]: I0114 13:19:18.144721 3018 server.go:927] "Client rotation is on, will bootstrap in background" Jan 14 13:19:18.162812 kubelet[3018]: I0114 13:19:18.162498 3018 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:19:18.168188 kubelet[3018]: E0114 13:19:18.168157 3018 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.4.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:18.178926 kubelet[3018]: I0114 13:19:18.178895 3018 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 13:19:18.180766 kubelet[3018]: I0114 13:19:18.180713 3018 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 13:19:18.180985 kubelet[3018]: I0114 13:19:18.180765 3018 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.0-a-42c09c22a8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 14 13:19:18.181137 kubelet[3018]: I0114 13:19:18.181005 3018 topology_manager.go:138] "Creating topology manager with none policy" Jan 
14 13:19:18.181137 kubelet[3018]: I0114 13:19:18.181019 3018 container_manager_linux.go:301] "Creating device plugin manager" Jan 14 13:19:18.181217 kubelet[3018]: I0114 13:19:18.181172 3018 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:19:18.181969 kubelet[3018]: I0114 13:19:18.181946 3018 kubelet.go:400] "Attempting to sync node with API server" Jan 14 13:19:18.181969 kubelet[3018]: I0114 13:19:18.181971 3018 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 13:19:18.182102 kubelet[3018]: I0114 13:19:18.181998 3018 kubelet.go:312] "Adding apiserver pod source" Jan 14 13:19:18.182102 kubelet[3018]: I0114 13:19:18.182018 3018 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 13:19:18.188082 kubelet[3018]: W0114 13:19:18.187339 3018 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:18.188082 kubelet[3018]: E0114 13:19:18.187426 3018 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:18.188082 kubelet[3018]: W0114 13:19:18.187506 3018 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-42c09c22a8&limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:18.188082 kubelet[3018]: E0114 13:19:18.187548 3018 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-42c09c22a8&limit=500&resourceVersion=0": 
dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:18.188508 kubelet[3018]: I0114 13:19:18.188478 3018 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 14 13:19:18.192143 kubelet[3018]: I0114 13:19:18.190730 3018 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 13:19:18.192143 kubelet[3018]: W0114 13:19:18.190797 3018 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 13:19:18.192143 kubelet[3018]: I0114 13:19:18.191753 3018 server.go:1264] "Started kubelet" Jan 14 13:19:18.194078 kubelet[3018]: I0114 13:19:18.194060 3018 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 13:19:18.197290 kubelet[3018]: I0114 13:19:18.196975 3018 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 13:19:18.198193 kubelet[3018]: I0114 13:19:18.198167 3018 server.go:455] "Adding debug handlers to kubelet server" Jan 14 13:19:18.199416 kubelet[3018]: I0114 13:19:18.199304 3018 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 13:19:18.199605 kubelet[3018]: I0114 13:19:18.199582 3018 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 13:19:18.201267 kubelet[3018]: I0114 13:19:18.201241 3018 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 14 13:19:18.203610 kubelet[3018]: I0114 13:19:18.203584 3018 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 14 13:19:18.203686 kubelet[3018]: I0114 13:19:18.203649 3018 reconciler.go:26] "Reconciler: start to sync state" Jan 14 13:19:18.216372 kubelet[3018]: E0114 13:19:18.215366 3018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.4.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-42c09c22a8?timeout=10s\": dial tcp 10.200.4.13:6443: connect: connection refused" interval="200ms" Jan 14 13:19:18.216372 kubelet[3018]: E0114 13:19:18.215685 3018 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.13:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.0-a-42c09c22a8.181a91a844555be6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.0-a-42c09c22a8,UID:ci-4152.2.0-a-42c09c22a8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.0-a-42c09c22a8,},FirstTimestamp:2025-01-14 13:19:18.191725542 +0000 UTC m=+0.940297806,LastTimestamp:2025-01-14 13:19:18.191725542 +0000 UTC m=+0.940297806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.0-a-42c09c22a8,}" Jan 14 13:19:18.216372 kubelet[3018]: I0114 13:19:18.216097 3018 factory.go:221] Registration of the systemd container factory successfully Jan 14 13:19:18.216372 kubelet[3018]: I0114 13:19:18.216184 3018 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 13:19:18.217490 kubelet[3018]: W0114 13:19:18.217442 3018 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:18.217632 kubelet[3018]: E0114 13:19:18.217618 3018 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:18.219794 kubelet[3018]: E0114 13:19:18.219765 3018 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 13:19:18.221402 kubelet[3018]: I0114 13:19:18.221229 3018 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 13:19:18.221624 kubelet[3018]: I0114 13:19:18.221600 3018 factory.go:221] Registration of the containerd container factory successfully Jan 14 13:19:18.223208 kubelet[3018]: I0114 13:19:18.222894 3018 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 13:19:18.223208 kubelet[3018]: I0114 13:19:18.222924 3018 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 14 13:19:18.223208 kubelet[3018]: I0114 13:19:18.222945 3018 kubelet.go:2337] "Starting kubelet main sync loop" Jan 14 13:19:18.223208 kubelet[3018]: E0114 13:19:18.222986 3018 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 13:19:18.234503 kubelet[3018]: W0114 13:19:18.234343 3018 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:18.234503 kubelet[3018]: E0114 13:19:18.234436 3018 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.4.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:18.265527 
kubelet[3018]: I0114 13:19:18.265504 3018 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 14 13:19:18.265678 kubelet[3018]: I0114 13:19:18.265610 3018 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 14 13:19:18.265678 kubelet[3018]: I0114 13:19:18.265651 3018 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:19:18.304087 kubelet[3018]: I0114 13:19:18.304046 3018 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:18.304596 kubelet[3018]: E0114 13:19:18.304574 3018 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.13:6443/api/v1/nodes\": dial tcp 10.200.4.13:6443: connect: connection refused" node="ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:18.327509 kubelet[3018]: E0114 13:19:18.323856 3018 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 13:19:18.373523 kubelet[3018]: I0114 13:19:18.373474 3018 policy_none.go:49] "None policy: Start" Jan 14 13:19:18.375498 kubelet[3018]: I0114 13:19:18.374661 3018 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 14 13:19:18.375498 kubelet[3018]: I0114 13:19:18.375028 3018 state_mem.go:35] "Initializing new in-memory state store" Jan 14 13:19:18.417176 kubelet[3018]: E0114 13:19:18.417003 3018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-42c09c22a8?timeout=10s\": dial tcp 10.200.4.13:6443: connect: connection refused" interval="400ms" Jan 14 13:19:18.507505 kubelet[3018]: I0114 13:19:18.507463 3018 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:18.507894 kubelet[3018]: E0114 13:19:18.507857 3018 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.13:6443/api/v1/nodes\": dial tcp 
10.200.4.13:6443: connect: connection refused" node="ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:18.524045 kubelet[3018]: E0114 13:19:18.524006 3018 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 13:19:18.831105 kubelet[3018]: E0114 13:19:18.820735 3018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-42c09c22a8?timeout=10s\": dial tcp 10.200.4.13:6443: connect: connection refused" interval="800ms" Jan 14 13:19:18.839141 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 13:19:18.848530 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 13:19:18.857950 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 13:19:18.859606 kubelet[3018]: I0114 13:19:18.859446 3018 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 13:19:18.859718 kubelet[3018]: I0114 13:19:18.859677 3018 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 13:19:18.859845 kubelet[3018]: I0114 13:19:18.859803 3018 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 13:19:18.861541 kubelet[3018]: E0114 13:19:18.861385 3018 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.0-a-42c09c22a8\" not found" Jan 14 13:19:18.910475 kubelet[3018]: I0114 13:19:18.910434 3018 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:18.910889 kubelet[3018]: E0114 13:19:18.910851 3018 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.13:6443/api/v1/nodes\": dial tcp 
10.200.4.13:6443: connect: connection refused" node="ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:18.925195 kubelet[3018]: I0114 13:19:18.925117 3018 topology_manager.go:215] "Topology Admit Handler" podUID="6ad89be5aa2ff7f50b8064d7ce24734a" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:18.927836 kubelet[3018]: I0114 13:19:18.927637 3018 topology_manager.go:215] "Topology Admit Handler" podUID="3c68d2c96eb75b57b20200319b4171d3" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:18.929481 kubelet[3018]: I0114 13:19:18.929435 3018 topology_manager.go:215] "Topology Admit Handler" podUID="63d2a693f1c68aa85a969f5bcf61fc63" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:18.939064 systemd[1]: Created slice kubepods-burstable-pod6ad89be5aa2ff7f50b8064d7ce24734a.slice - libcontainer container kubepods-burstable-pod6ad89be5aa2ff7f50b8064d7ce24734a.slice. Jan 14 13:19:18.950341 systemd[1]: Created slice kubepods-burstable-pod3c68d2c96eb75b57b20200319b4171d3.slice - libcontainer container kubepods-burstable-pod3c68d2c96eb75b57b20200319b4171d3.slice. Jan 14 13:19:18.958146 systemd[1]: Created slice kubepods-burstable-pod63d2a693f1c68aa85a969f5bcf61fc63.slice - libcontainer container kubepods-burstable-pod63d2a693f1c68aa85a969f5bcf61fc63.slice. 
Jan 14 13:19:19.009424 kubelet[3018]: I0114 13:19:19.009376 3018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ad89be5aa2ff7f50b8064d7ce24734a-k8s-certs\") pod \"kube-apiserver-ci-4152.2.0-a-42c09c22a8\" (UID: \"6ad89be5aa2ff7f50b8064d7ce24734a\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:19.009605 kubelet[3018]: I0114 13:19:19.009449 3018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ad89be5aa2ff7f50b8064d7ce24734a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.0-a-42c09c22a8\" (UID: \"6ad89be5aa2ff7f50b8064d7ce24734a\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:19.009605 kubelet[3018]: I0114 13:19:19.009499 3018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c68d2c96eb75b57b20200319b4171d3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.0-a-42c09c22a8\" (UID: \"3c68d2c96eb75b57b20200319b4171d3\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:19.009605 kubelet[3018]: I0114 13:19:19.009528 3018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3c68d2c96eb75b57b20200319b4171d3-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.0-a-42c09c22a8\" (UID: \"3c68d2c96eb75b57b20200319b4171d3\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:19.009605 kubelet[3018]: I0114 13:19:19.009556 3018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/63d2a693f1c68aa85a969f5bcf61fc63-kubeconfig\") pod \"kube-scheduler-ci-4152.2.0-a-42c09c22a8\" (UID: \"63d2a693f1c68aa85a969f5bcf61fc63\") " pod="kube-system/kube-scheduler-ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:19.009605 kubelet[3018]: I0114 13:19:19.009581 3018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ad89be5aa2ff7f50b8064d7ce24734a-ca-certs\") pod \"kube-apiserver-ci-4152.2.0-a-42c09c22a8\" (UID: \"6ad89be5aa2ff7f50b8064d7ce24734a\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:19.009831 kubelet[3018]: I0114 13:19:19.009606 3018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c68d2c96eb75b57b20200319b4171d3-ca-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-42c09c22a8\" (UID: \"3c68d2c96eb75b57b20200319b4171d3\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:19.009831 kubelet[3018]: I0114 13:19:19.009633 3018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3c68d2c96eb75b57b20200319b4171d3-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.0-a-42c09c22a8\" (UID: \"3c68d2c96eb75b57b20200319b4171d3\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:19.009831 kubelet[3018]: I0114 13:19:19.009663 3018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c68d2c96eb75b57b20200319b4171d3-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-42c09c22a8\" (UID: \"3c68d2c96eb75b57b20200319b4171d3\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:19.139050 kubelet[3018]: W0114 13:19:19.138884 3018 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:19.139050 kubelet[3018]: E0114 13:19:19.138970 3018 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:19.220034 kubelet[3018]: W0114 13:19:19.219951 3018 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:19.220034 kubelet[3018]: E0114 13:19:19.220041 3018 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:19.249421 containerd[1761]: time="2025-01-14T13:19:19.249374281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.0-a-42c09c22a8,Uid:6ad89be5aa2ff7f50b8064d7ce24734a,Namespace:kube-system,Attempt:0,}" Jan 14 13:19:19.256963 containerd[1761]: time="2025-01-14T13:19:19.256919189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.0-a-42c09c22a8,Uid:3c68d2c96eb75b57b20200319b4171d3,Namespace:kube-system,Attempt:0,}" Jan 14 13:19:19.260742 containerd[1761]: time="2025-01-14T13:19:19.260706093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.0-a-42c09c22a8,Uid:63d2a693f1c68aa85a969f5bcf61fc63,Namespace:kube-system,Attempt:0,}" Jan 14 13:19:19.393292 kubelet[3018]: W0114 13:19:19.393133 3018 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:19.393292 kubelet[3018]: E0114 13:19:19.393211 3018 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.4.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:19.622219 kubelet[3018]: E0114 13:19:19.622156 3018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-42c09c22a8?timeout=10s\": dial tcp 10.200.4.13:6443: connect: connection refused" interval="1.6s" Jan 14 13:19:19.677133 kubelet[3018]: W0114 13:19:19.676979 3018 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-42c09c22a8&limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:19.677133 kubelet[3018]: E0114 13:19:19.677057 3018 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-42c09c22a8&limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:19.713397 kubelet[3018]: I0114 13:19:19.713333 3018 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:19.713779 kubelet[3018]: E0114 13:19:19.713742 3018 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.13:6443/api/v1/nodes\": dial tcp 10.200.4.13:6443: connect: 
connection refused" node="ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:19.932986 kubelet[3018]: E0114 13:19:19.932807 3018 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.13:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.0-a-42c09c22a8.181a91a844555be6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.0-a-42c09c22a8,UID:ci-4152.2.0-a-42c09c22a8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.0-a-42c09c22a8,},FirstTimestamp:2025-01-14 13:19:18.191725542 +0000 UTC m=+0.940297806,LastTimestamp:2025-01-14 13:19:18.191725542 +0000 UTC m=+0.940297806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.0-a-42c09c22a8,}" Jan 14 13:19:20.310099 kubelet[3018]: E0114 13:19:20.309986 3018 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.4.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:21.142779 kubelet[3018]: W0114 13:19:21.142730 3018 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:21.142779 kubelet[3018]: E0114 13:19:21.142783 3018 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection 
refused Jan 14 13:19:21.223673 kubelet[3018]: E0114 13:19:21.223618 3018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-42c09c22a8?timeout=10s\": dial tcp 10.200.4.13:6443: connect: connection refused" interval="3.2s" Jan 14 13:19:21.240123 kubelet[3018]: W0114 13:19:21.240075 3018 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:21.240123 kubelet[3018]: E0114 13:19:21.240127 3018 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:21.316716 kubelet[3018]: I0114 13:19:21.316678 3018 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:21.317099 kubelet[3018]: E0114 13:19:21.317065 3018 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.13:6443/api/v1/nodes\": dial tcp 10.200.4.13:6443: connect: connection refused" node="ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:21.472600 kubelet[3018]: W0114 13:19:21.472469 3018 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:21.472600 kubelet[3018]: E0114 13:19:21.472520 3018 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.200.4.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:22.127470 kubelet[3018]: W0114 13:19:22.127411 3018 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-42c09c22a8&limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:22.127470 kubelet[3018]: E0114 13:19:22.127474 3018 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-42c09c22a8&limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:24.424921 kubelet[3018]: E0114 13:19:24.424866 3018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-42c09c22a8?timeout=10s\": dial tcp 10.200.4.13:6443: connect: connection refused" interval="6.4s" Jan 14 13:19:24.519567 kubelet[3018]: I0114 13:19:24.519529 3018 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:24.519955 kubelet[3018]: E0114 13:19:24.519917 3018 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.13:6443/api/v1/nodes\": dial tcp 10.200.4.13:6443: connect: connection refused" node="ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:24.988114 kubelet[3018]: W0114 13:19:24.634916 3018 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:24.988114 kubelet[3018]: E0114 13:19:24.634957 3018 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:24.988114 kubelet[3018]: E0114 13:19:24.659857 3018 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.4.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:25.832321 kubelet[3018]: W0114 13:19:25.832272 3018 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:25.832321 kubelet[3018]: E0114 13:19:25.832327 3018 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.4.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:25.857127 kubelet[3018]: W0114 13:19:25.857073 3018 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:25.857127 kubelet[3018]: E0114 13:19:25.857131 3018 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:26.754320 kubelet[3018]: W0114 13:19:26.754271 3018 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-42c09c22a8&limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:26.754320 kubelet[3018]: E0114 13:19:26.754326 3018 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-42c09c22a8&limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:28.861735 kubelet[3018]: E0114 13:19:28.861519 3018 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.0-a-42c09c22a8\" not found" Jan 14 13:19:29.934305 kubelet[3018]: E0114 13:19:29.934199 3018 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.13:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.0-a-42c09c22a8.181a91a844555be6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.0-a-42c09c22a8,UID:ci-4152.2.0-a-42c09c22a8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.0-a-42c09c22a8,},FirstTimestamp:2025-01-14 13:19:18.191725542 +0000 UTC m=+0.940297806,LastTimestamp:2025-01-14 13:19:18.191725542 +0000 UTC m=+0.940297806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.0-a-42c09c22a8,}" Jan 14 13:19:30.138388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4115275562.mount: Deactivated successfully. 
Jan 14 13:19:30.386625 containerd[1761]: time="2025-01-14T13:19:30.386566312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:19:30.525680 containerd[1761]: time="2025-01-14T13:19:30.525613105Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 14 13:19:30.588648 containerd[1761]: time="2025-01-14T13:19:30.588566958Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:19:30.684779 containerd[1761]: time="2025-01-14T13:19:30.684577485Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:19:30.731223 containerd[1761]: time="2025-01-14T13:19:30.730588439Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 14 13:19:30.779515 containerd[1761]: time="2025-01-14T13:19:30.779402776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:19:30.780804 containerd[1761]: time="2025-01-14T13:19:30.780499109Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 11.531016724s" Jan 14 13:19:30.823887 containerd[1761]: 
time="2025-01-14T13:19:30.823813378Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:19:30.826145 kubelet[3018]: E0114 13:19:30.826092 3018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-42c09c22a8?timeout=10s\": dial tcp 10.200.4.13:6443: connect: connection refused" interval="7s" Jan 14 13:19:30.828305 containerd[1761]: time="2025-01-14T13:19:30.828241907Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 14 13:19:30.890239 containerd[1761]: time="2025-01-14T13:19:30.890179312Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 11.633163721s" Jan 14 13:19:30.921897 kubelet[3018]: I0114 13:19:30.921862 3018 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:30.922339 kubelet[3018]: E0114 13:19:30.922276 3018 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.13:6443/api/v1/nodes\": dial tcp 10.200.4.13:6443: connect: connection refused" node="ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:31.031060 containerd[1761]: time="2025-01-14T13:19:31.030893314Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 11.770070718s" Jan 14 
13:19:32.385782 containerd[1761]: time="2025-01-14T13:19:32.385520802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:19:32.385782 containerd[1761]: time="2025-01-14T13:19:32.385585704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:19:32.385782 containerd[1761]: time="2025-01-14T13:19:32.385606104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:19:32.385782 containerd[1761]: time="2025-01-14T13:19:32.385693107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:19:32.391131 containerd[1761]: time="2025-01-14T13:19:32.390437845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:19:32.391131 containerd[1761]: time="2025-01-14T13:19:32.390500847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:19:32.391131 containerd[1761]: time="2025-01-14T13:19:32.390522248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:19:32.391131 containerd[1761]: time="2025-01-14T13:19:32.390606550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:19:32.401120 containerd[1761]: time="2025-01-14T13:19:32.384034559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:19:32.401120 containerd[1761]: time="2025-01-14T13:19:32.400955052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:19:32.401120 containerd[1761]: time="2025-01-14T13:19:32.400983153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:19:32.405371 containerd[1761]: time="2025-01-14T13:19:32.402942210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:19:32.436570 systemd[1]: Started cri-containerd-57ad3d8e93b19f7c707b2671a3ce764f17f23093b796f506b7c1a4f253d32d2e.scope - libcontainer container 57ad3d8e93b19f7c707b2671a3ce764f17f23093b796f506b7c1a4f253d32d2e. Jan 14 13:19:32.444971 systemd[1]: Started cri-containerd-58749853025d2cd566eaac072bfb945d04c7111c93921958396db6e35435fbbd.scope - libcontainer container 58749853025d2cd566eaac072bfb945d04c7111c93921958396db6e35435fbbd. Jan 14 13:19:32.449947 systemd[1]: Started cri-containerd-b22cb84a369a789654f33f7451b9cd63ad1f3b794ae8321457e126ada3d8947e.scope - libcontainer container b22cb84a369a789654f33f7451b9cd63ad1f3b794ae8321457e126ada3d8947e. 
Jan 14 13:19:32.530838 containerd[1761]: time="2025-01-14T13:19:32.530754035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.0-a-42c09c22a8,Uid:3c68d2c96eb75b57b20200319b4171d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"58749853025d2cd566eaac072bfb945d04c7111c93921958396db6e35435fbbd\"" Jan 14 13:19:32.544577 containerd[1761]: time="2025-01-14T13:19:32.544461335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.0-a-42c09c22a8,Uid:6ad89be5aa2ff7f50b8064d7ce24734a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b22cb84a369a789654f33f7451b9cd63ad1f3b794ae8321457e126ada3d8947e\"" Jan 14 13:19:32.548676 containerd[1761]: time="2025-01-14T13:19:32.548151643Z" level=info msg="CreateContainer within sandbox \"58749853025d2cd566eaac072bfb945d04c7111c93921958396db6e35435fbbd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 13:19:32.552247 containerd[1761]: time="2025-01-14T13:19:32.551897152Z" level=info msg="CreateContainer within sandbox \"b22cb84a369a789654f33f7451b9cd63ad1f3b794ae8321457e126ada3d8947e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 13:19:32.553480 containerd[1761]: time="2025-01-14T13:19:32.553454397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.0-a-42c09c22a8,Uid:63d2a693f1c68aa85a969f5bcf61fc63,Namespace:kube-system,Attempt:0,} returns sandbox id \"57ad3d8e93b19f7c707b2671a3ce764f17f23093b796f506b7c1a4f253d32d2e\"" Jan 14 13:19:32.559432 containerd[1761]: time="2025-01-14T13:19:32.559280267Z" level=info msg="CreateContainer within sandbox \"57ad3d8e93b19f7c707b2671a3ce764f17f23093b796f506b7c1a4f253d32d2e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 13:19:32.635664 kubelet[3018]: W0114 13:19:32.635582 3018 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.200.4.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:32.635664 kubelet[3018]: E0114 13:19:32.635664 3018 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:32.686132 containerd[1761]: time="2025-01-14T13:19:32.685946159Z" level=info msg="CreateContainer within sandbox \"58749853025d2cd566eaac072bfb945d04c7111c93921958396db6e35435fbbd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d8b3bac1e1b4d85de34fb01e38aee5c08901f71129c2eb5f9f58cfe465e64caf\"" Jan 14 13:19:32.687340 containerd[1761]: time="2025-01-14T13:19:32.686958789Z" level=info msg="StartContainer for \"d8b3bac1e1b4d85de34fb01e38aee5c08901f71129c2eb5f9f58cfe465e64caf\"" Jan 14 13:19:32.734739 containerd[1761]: time="2025-01-14T13:19:32.734684080Z" level=info msg="CreateContainer within sandbox \"b22cb84a369a789654f33f7451b9cd63ad1f3b794ae8321457e126ada3d8947e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4232587f0e850ecb0384276fbda62b8b56ff5543c5c4f0eff5cd03d9dcc23393\"" Jan 14 13:19:32.736430 containerd[1761]: time="2025-01-14T13:19:32.736099321Z" level=info msg="StartContainer for \"4232587f0e850ecb0384276fbda62b8b56ff5543c5c4f0eff5cd03d9dcc23393\"" Jan 14 13:19:32.757578 systemd[1]: Started cri-containerd-d8b3bac1e1b4d85de34fb01e38aee5c08901f71129c2eb5f9f58cfe465e64caf.scope - libcontainer container d8b3bac1e1b4d85de34fb01e38aee5c08901f71129c2eb5f9f58cfe465e64caf. 
Jan 14 13:19:32.781268 kubelet[3018]: E0114 13:19:32.771124 3018 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.4.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.13:6443: connect: connection refused Jan 14 13:19:32.782701 containerd[1761]: time="2025-01-14T13:19:32.782562476Z" level=info msg="CreateContainer within sandbox \"57ad3d8e93b19f7c707b2671a3ce764f17f23093b796f506b7c1a4f253d32d2e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b21ca9c12eadc2f500944649c0f89efde85707f2312bc846c2125c7240e1f91c\"" Jan 14 13:19:32.785375 containerd[1761]: time="2025-01-14T13:19:32.783514103Z" level=info msg="StartContainer for \"b21ca9c12eadc2f500944649c0f89efde85707f2312bc846c2125c7240e1f91c\"" Jan 14 13:19:32.815235 systemd[1]: Started cri-containerd-4232587f0e850ecb0384276fbda62b8b56ff5543c5c4f0eff5cd03d9dcc23393.scope - libcontainer container 4232587f0e850ecb0384276fbda62b8b56ff5543c5c4f0eff5cd03d9dcc23393. Jan 14 13:19:32.844497 systemd[1]: Started cri-containerd-b21ca9c12eadc2f500944649c0f89efde85707f2312bc846c2125c7240e1f91c.scope - libcontainer container b21ca9c12eadc2f500944649c0f89efde85707f2312bc846c2125c7240e1f91c. 
Jan 14 13:19:32.854370 containerd[1761]: time="2025-01-14T13:19:32.854309667Z" level=info msg="StartContainer for \"d8b3bac1e1b4d85de34fb01e38aee5c08901f71129c2eb5f9f58cfe465e64caf\" returns successfully" Jan 14 13:19:32.919666 containerd[1761]: time="2025-01-14T13:19:32.919618971Z" level=info msg="StartContainer for \"4232587f0e850ecb0384276fbda62b8b56ff5543c5c4f0eff5cd03d9dcc23393\" returns successfully" Jan 14 13:19:32.946444 containerd[1761]: time="2025-01-14T13:19:32.946303849Z" level=info msg="StartContainer for \"b21ca9c12eadc2f500944649c0f89efde85707f2312bc846c2125c7240e1f91c\" returns successfully" Jan 14 13:19:35.197981 kubelet[3018]: I0114 13:19:35.197700 3018 apiserver.go:52] "Watching apiserver" Jan 14 13:19:35.204097 kubelet[3018]: I0114 13:19:35.204062 3018 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 14 13:19:35.276427 kubelet[3018]: E0114 13:19:35.276387 3018 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4152.2.0-a-42c09c22a8" not found Jan 14 13:19:35.632772 kubelet[3018]: E0114 13:19:35.632719 3018 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4152.2.0-a-42c09c22a8" not found Jan 14 13:19:36.066764 kubelet[3018]: E0114 13:19:36.066618 3018 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4152.2.0-a-42c09c22a8" not found Jan 14 13:19:36.970787 kubelet[3018]: E0114 13:19:36.970741 3018 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4152.2.0-a-42c09c22a8" not found Jan 14 13:19:37.829928 kubelet[3018]: E0114 13:19:37.829878 3018 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"ci-4152.2.0-a-42c09c22a8\" not found" node="ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:37.925396 kubelet[3018]: I0114 13:19:37.925333 3018 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:37.933335 kubelet[3018]: I0114 13:19:37.933288 3018 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.0-a-42c09c22a8" Jan 14 13:19:38.391274 systemd[1]: Reloading requested from client PID 3292 ('systemctl') (unit session-9.scope)... Jan 14 13:19:38.391291 systemd[1]: Reloading... Jan 14 13:19:38.490415 zram_generator::config[3332]: No configuration found. Jan 14 13:19:38.626856 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:19:38.723760 systemd[1]: Reloading finished in 331 ms. Jan 14 13:19:38.776106 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:19:38.776733 kubelet[3018]: I0114 13:19:38.776708 3018 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:19:38.801502 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 13:19:38.802140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:19:38.802321 systemd[1]: kubelet.service: Consumed 1.418s CPU time, 115.5M memory peak, 0B memory swap peak. Jan 14 13:19:38.810605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:19:38.943596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 13:19:38.963814 (kubelet)[3399]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 13:19:39.003506 kubelet[3399]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:19:39.003506 kubelet[3399]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 14 13:19:39.003506 kubelet[3399]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:19:39.003506 kubelet[3399]: I0114 13:19:39.003428 3399 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 13:19:39.011213 kubelet[3399]: I0114 13:19:39.011183 3399 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 14 13:19:39.011393 kubelet[3399]: I0114 13:19:39.011340 3399 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 13:19:39.011639 kubelet[3399]: I0114 13:19:39.011612 3399 server.go:927] "Client rotation is on, will bootstrap in background" Jan 14 13:19:39.012871 kubelet[3399]: I0114 13:19:39.012844 3399 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 14 13:19:39.014295 kubelet[3399]: I0114 13:19:39.014137 3399 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:19:39.020235 kubelet[3399]: I0114 13:19:39.020212 3399 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 13:19:39.020495 kubelet[3399]: I0114 13:19:39.020469 3399 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 13:19:39.020667 kubelet[3399]: I0114 13:19:39.020493 3399 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.0-a-42c09c22a8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 14 13:19:39.020796 kubelet[3399]: I0114 13:19:39.020681 3399 topology_manager.go:138] "Creating topology manager with none policy" Jan 
14 13:19:39.020796 kubelet[3399]: I0114 13:19:39.020695 3399 container_manager_linux.go:301] "Creating device plugin manager"
Jan 14 13:19:39.020796 kubelet[3399]: I0114 13:19:39.020745 3399 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 13:19:39.020921 kubelet[3399]: I0114 13:19:39.020850 3399 kubelet.go:400] "Attempting to sync node with API server"
Jan 14 13:19:39.020921 kubelet[3399]: I0114 13:19:39.020864 3399 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 14 13:19:39.020921 kubelet[3399]: I0114 13:19:39.020892 3399 kubelet.go:312] "Adding apiserver pod source"
Jan 14 13:19:39.020921 kubelet[3399]: I0114 13:19:39.020915 3399 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 14 13:19:39.025375 kubelet[3399]: I0114 13:19:39.024668 3399 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 14 13:19:39.025375 kubelet[3399]: I0114 13:19:39.024846 3399 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 14 13:19:39.025375 kubelet[3399]: I0114 13:19:39.025249 3399 server.go:1264] "Started kubelet"
Jan 14 13:19:39.029324 kubelet[3399]: I0114 13:19:39.029293 3399 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 14 13:19:39.040382 kubelet[3399]: I0114 13:19:39.038648 3399 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 14 13:19:39.043888 kubelet[3399]: I0114 13:19:39.043860 3399 server.go:455] "Adding debug handlers to kubelet server"
Jan 14 13:19:39.045994 kubelet[3399]: I0114 13:19:39.045932 3399 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 14 13:19:39.046220 kubelet[3399]: I0114 13:19:39.046197 3399 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 14 13:19:39.051703 kubelet[3399]: I0114 13:19:39.050422 3399 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 14 13:19:39.052381 kubelet[3399]: I0114 13:19:39.052082 3399 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 14 13:19:39.054040 kubelet[3399]: I0114 13:19:39.054025 3399 reconciler.go:26] "Reconciler: start to sync state"
Jan 14 13:19:39.056855 kubelet[3399]: I0114 13:19:39.055942 3399 factory.go:221] Registration of the systemd container factory successfully
Jan 14 13:19:39.059646 kubelet[3399]: I0114 13:19:39.059603 3399 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 14 13:19:39.059908 kubelet[3399]: I0114 13:19:39.059876 3399 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 14 13:19:39.066116 kubelet[3399]: I0114 13:19:39.066078 3399 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 14 13:19:39.066116 kubelet[3399]: I0114 13:19:39.066122 3399 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 14 13:19:39.066301 kubelet[3399]: I0114 13:19:39.066140 3399 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 14 13:19:39.066301 kubelet[3399]: E0114 13:19:39.066185 3399 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 14 13:19:39.073430 kubelet[3399]: I0114 13:19:39.070222 3399 factory.go:221] Registration of the containerd container factory successfully
Jan 14 13:19:39.114154 kubelet[3399]: I0114 13:19:39.113837 3399 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 14 13:19:39.114154 kubelet[3399]: I0114 13:19:39.113857 3399 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 14 13:19:39.114154 kubelet[3399]: I0114 13:19:39.113879 3399 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 13:19:39.114154 kubelet[3399]: I0114 13:19:39.114067 3399 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 14 13:19:39.114154 kubelet[3399]: I0114 13:19:39.114079 3399 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 14 13:19:39.114154 kubelet[3399]: I0114 13:19:39.114103 3399 policy_none.go:49] "None policy: Start"
Jan 14 13:19:39.115305 kubelet[3399]: I0114 13:19:39.115277 3399 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 14 13:19:39.115305 kubelet[3399]: I0114 13:19:39.115312 3399 state_mem.go:35] "Initializing new in-memory state store"
Jan 14 13:19:39.115712 kubelet[3399]: I0114 13:19:39.115610 3399 state_mem.go:75] "Updated machine memory state"
Jan 14 13:19:39.119623 kubelet[3399]: I0114 13:19:39.119598 3399 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 14 13:19:39.120306 kubelet[3399]: I0114 13:19:39.119929 3399 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 14 13:19:39.120306 kubelet[3399]: I0114 13:19:39.120047 3399 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 14 13:19:39.155122 kubelet[3399]: I0114 13:19:39.155091 3399 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-42c09c22a8"
Jan 14 13:19:39.167091 kubelet[3399]: I0114 13:19:39.167036 3399 topology_manager.go:215] "Topology Admit Handler" podUID="6ad89be5aa2ff7f50b8064d7ce24734a" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.0-a-42c09c22a8"
Jan 14 13:19:39.167261 kubelet[3399]: I0114 13:19:39.167146 3399 topology_manager.go:215] "Topology Admit Handler" podUID="3c68d2c96eb75b57b20200319b4171d3" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.0-a-42c09c22a8"
Jan 14 13:19:39.167261 kubelet[3399]: I0114 13:19:39.167217 3399 topology_manager.go:215] "Topology Admit Handler" podUID="63d2a693f1c68aa85a969f5bcf61fc63" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.0-a-42c09c22a8"
Jan 14 13:19:39.489098 kubelet[3399]: I0114 13:19:39.488608 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ad89be5aa2ff7f50b8064d7ce24734a-ca-certs\") pod \"kube-apiserver-ci-4152.2.0-a-42c09c22a8\" (UID: \"6ad89be5aa2ff7f50b8064d7ce24734a\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-42c09c22a8"
Jan 14 13:19:39.489098 kubelet[3399]: I0114 13:19:39.488663 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/63d2a693f1c68aa85a969f5bcf61fc63-kubeconfig\") pod \"kube-scheduler-ci-4152.2.0-a-42c09c22a8\" (UID: \"63d2a693f1c68aa85a969f5bcf61fc63\") " pod="kube-system/kube-scheduler-ci-4152.2.0-a-42c09c22a8"
Jan 14 13:19:39.489098 kubelet[3399]: I0114 13:19:39.488695 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ad89be5aa2ff7f50b8064d7ce24734a-k8s-certs\") pod \"kube-apiserver-ci-4152.2.0-a-42c09c22a8\" (UID: \"6ad89be5aa2ff7f50b8064d7ce24734a\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-42c09c22a8"
Jan 14 13:19:39.489098 kubelet[3399]: I0114 13:19:39.488727 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ad89be5aa2ff7f50b8064d7ce24734a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.0-a-42c09c22a8\" (UID: \"6ad89be5aa2ff7f50b8064d7ce24734a\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-42c09c22a8"
Jan 14 13:19:39.489098 kubelet[3399]: I0114 13:19:39.488758 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c68d2c96eb75b57b20200319b4171d3-ca-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-42c09c22a8\" (UID: \"3c68d2c96eb75b57b20200319b4171d3\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-42c09c22a8"
Jan 14 13:19:39.489528 kubelet[3399]: I0114 13:19:39.488790 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3c68d2c96eb75b57b20200319b4171d3-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.0-a-42c09c22a8\" (UID: \"3c68d2c96eb75b57b20200319b4171d3\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-42c09c22a8"
Jan 14 13:19:39.489528 kubelet[3399]: I0114 13:19:39.488816 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c68d2c96eb75b57b20200319b4171d3-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-42c09c22a8\" (UID: \"3c68d2c96eb75b57b20200319b4171d3\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-42c09c22a8"
Jan 14 13:19:39.489528 kubelet[3399]: I0114 13:19:39.488844 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3c68d2c96eb75b57b20200319b4171d3-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.0-a-42c09c22a8\" (UID: \"3c68d2c96eb75b57b20200319b4171d3\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-42c09c22a8"
Jan 14 13:19:39.489528 kubelet[3399]: I0114 13:19:39.488875 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c68d2c96eb75b57b20200319b4171d3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.0-a-42c09c22a8\" (UID: \"3c68d2c96eb75b57b20200319b4171d3\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-42c09c22a8"
Jan 14 13:19:39.492582 kubelet[3399]: I0114 13:19:39.489897 3399 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152.2.0-a-42c09c22a8"
Jan 14 13:19:39.492582 kubelet[3399]: I0114 13:19:39.489981 3399 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.0-a-42c09c22a8"
Jan 14 13:19:39.512736 kubelet[3399]: W0114 13:19:39.512706 3399 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 14 13:19:39.513212 kubelet[3399]: W0114 13:19:39.512930 3399 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 14 13:19:39.513794 kubelet[3399]: W0114 13:19:39.512996 3399 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 14 13:19:40.022089 kubelet[3399]: I0114 13:19:40.022049 3399 apiserver.go:52] "Watching apiserver"
Jan 14 13:19:40.054105 kubelet[3399]: I0114 13:19:40.054052 3399 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 14 13:19:40.111444 kubelet[3399]: I0114 13:19:40.111138 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.2.0-a-42c09c22a8" podStartSLOduration=1.111116852 podStartE2EDuration="1.111116852s" podCreationTimestamp="2025-01-14 13:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:19:40.111079151 +0000 UTC m=+1.143079136" watchObservedRunningTime="2025-01-14 13:19:40.111116852 +0000 UTC m=+1.143116737"
Jan 14 13:19:40.143113 kubelet[3399]: I0114 13:19:40.142936 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.2.0-a-42c09c22a8" podStartSLOduration=1.142908324 podStartE2EDuration="1.142908324s" podCreationTimestamp="2025-01-14 13:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:19:40.12670478 +0000 UTC m=+1.158704665" watchObservedRunningTime="2025-01-14 13:19:40.142908324 +0000 UTC m=+1.174908209"
Jan 14 13:19:40.154278 kubelet[3399]: I0114 13:19:40.154107 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.2.0-a-42c09c22a8" podStartSLOduration=1.15408543 podStartE2EDuration="1.15408543s" podCreationTimestamp="2025-01-14 13:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:19:40.143200532 +0000 UTC m=+1.175200417" watchObservedRunningTime="2025-01-14 13:19:40.15408543 +0000 UTC m=+1.186085515"
Jan 14 13:19:41.752662 sudo[3430]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 14 13:19:41.753049 sudo[3430]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 14 13:19:42.291204 sudo[3430]: pam_unix(sudo:session): session closed for user root
Jan 14 13:19:43.643038 sudo[2310]: pam_unix(sudo:session): session closed for user root
Jan 14 13:19:43.739258 sshd[2309]: Connection closed by 10.200.16.10 port 45146
Jan 14 13:19:43.740052 sshd-session[2307]: pam_unix(sshd:session): session closed for user core
Jan 14 13:19:43.745115 systemd[1]: sshd@6-10.200.4.13:22-10.200.16.10:45146.service: Deactivated successfully.
Jan 14 13:19:43.747741 systemd[1]: session-9.scope: Deactivated successfully.
Jan 14 13:19:43.748010 systemd[1]: session-9.scope: Consumed 4.642s CPU time, 186.8M memory peak, 0B memory swap peak.
Jan 14 13:19:43.748994 systemd-logind[1734]: Session 9 logged out. Waiting for processes to exit.
Jan 14 13:19:43.750236 systemd-logind[1734]: Removed session 9.
Jan 14 13:19:53.505137 kubelet[3399]: I0114 13:19:53.505075 3399 topology_manager.go:215] "Topology Admit Handler" podUID="0efbfd67-e438-44c9-be8f-e424b1d930d9" podNamespace="kube-system" podName="cilium-jww2g"
Jan 14 13:19:53.514473 kubelet[3399]: I0114 13:19:53.514393 3399 topology_manager.go:215] "Topology Admit Handler" podUID="a38ea366-9df0-43a3-8577-a759f79245d1" podNamespace="kube-system" podName="kube-proxy-vg7cx"
Jan 14 13:19:53.529874 systemd[1]: Created slice kubepods-burstable-pod0efbfd67_e438_44c9_be8f_e424b1d930d9.slice - libcontainer container kubepods-burstable-pod0efbfd67_e438_44c9_be8f_e424b1d930d9.slice.
Jan 14 13:19:53.542343 systemd[1]: Created slice kubepods-besteffort-poda38ea366_9df0_43a3_8577_a759f79245d1.slice - libcontainer container kubepods-besteffort-poda38ea366_9df0_43a3_8577_a759f79245d1.slice.
Jan 14 13:19:53.547921 kubelet[3399]: I0114 13:19:53.547891 3399 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 14 13:19:53.548625 containerd[1761]: time="2025-01-14T13:19:53.548580184Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 14 13:19:53.548998 kubelet[3399]: I0114 13:19:53.548848 3399 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 14 13:19:53.583177 kubelet[3399]: I0114 13:19:53.583126 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a38ea366-9df0-43a3-8577-a759f79245d1-kube-proxy\") pod \"kube-proxy-vg7cx\" (UID: \"a38ea366-9df0-43a3-8577-a759f79245d1\") " pod="kube-system/kube-proxy-vg7cx"
Jan 14 13:19:53.583364 kubelet[3399]: I0114 13:19:53.583182 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-cilium-cgroup\") pod \"cilium-jww2g\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " pod="kube-system/cilium-jww2g"
Jan 14 13:19:53.583364 kubelet[3399]: I0114 13:19:53.583210 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0efbfd67-e438-44c9-be8f-e424b1d930d9-cilium-config-path\") pod \"cilium-jww2g\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " pod="kube-system/cilium-jww2g"
Jan 14 13:19:53.583364 kubelet[3399]: I0114 13:19:53.583229 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-bpf-maps\") pod \"cilium-jww2g\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " pod="kube-system/cilium-jww2g"
Jan 14 13:19:53.583364 kubelet[3399]: I0114 13:19:53.583247 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-xtables-lock\") pod \"cilium-jww2g\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " pod="kube-system/cilium-jww2g"
Jan 14 13:19:53.583364 kubelet[3399]: I0114 13:19:53.583265 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0efbfd67-e438-44c9-be8f-e424b1d930d9-clustermesh-secrets\") pod \"cilium-jww2g\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " pod="kube-system/cilium-jww2g"
Jan 14 13:19:53.583364 kubelet[3399]: I0114 13:19:53.583283 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0efbfd67-e438-44c9-be8f-e424b1d930d9-hubble-tls\") pod \"cilium-jww2g\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " pod="kube-system/cilium-jww2g"
Jan 14 13:19:53.583618 kubelet[3399]: I0114 13:19:53.583301 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8d2v\" (UniqueName: \"kubernetes.io/projected/0efbfd67-e438-44c9-be8f-e424b1d930d9-kube-api-access-p8d2v\") pod \"cilium-jww2g\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " pod="kube-system/cilium-jww2g"
Jan 14 13:19:53.583618 kubelet[3399]: I0114 13:19:53.583324 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a38ea366-9df0-43a3-8577-a759f79245d1-xtables-lock\") pod \"kube-proxy-vg7cx\" (UID: \"a38ea366-9df0-43a3-8577-a759f79245d1\") " pod="kube-system/kube-proxy-vg7cx"
Jan 14 13:19:53.583618 kubelet[3399]: I0114 13:19:53.583345 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-hostproc\") pod \"cilium-jww2g\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " pod="kube-system/cilium-jww2g"
Jan 14 13:19:53.583618 kubelet[3399]: I0114 13:19:53.583379 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-host-proc-sys-net\") pod \"cilium-jww2g\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " pod="kube-system/cilium-jww2g"
Jan 14 13:19:53.583618 kubelet[3399]: I0114 13:19:53.583401 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-etc-cni-netd\") pod \"cilium-jww2g\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " pod="kube-system/cilium-jww2g"
Jan 14 13:19:53.583618 kubelet[3399]: I0114 13:19:53.583422 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-lib-modules\") pod \"cilium-jww2g\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " pod="kube-system/cilium-jww2g"
Jan 14 13:19:53.583856 kubelet[3399]: I0114 13:19:53.583445 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-host-proc-sys-kernel\") pod \"cilium-jww2g\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " pod="kube-system/cilium-jww2g"
Jan 14 13:19:53.583856 kubelet[3399]: I0114 13:19:53.583470 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxm9r\" (UniqueName: \"kubernetes.io/projected/a38ea366-9df0-43a3-8577-a759f79245d1-kube-api-access-dxm9r\") pod \"kube-proxy-vg7cx\" (UID: \"a38ea366-9df0-43a3-8577-a759f79245d1\") " pod="kube-system/kube-proxy-vg7cx"
Jan 14 13:19:53.583856 kubelet[3399]: I0114 13:19:53.583493 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-cni-path\") pod \"cilium-jww2g\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " pod="kube-system/cilium-jww2g"
Jan 14 13:19:53.583856 kubelet[3399]: I0114 13:19:53.583517 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a38ea366-9df0-43a3-8577-a759f79245d1-lib-modules\") pod \"kube-proxy-vg7cx\" (UID: \"a38ea366-9df0-43a3-8577-a759f79245d1\") " pod="kube-system/kube-proxy-vg7cx"
Jan 14 13:19:53.583856 kubelet[3399]: I0114 13:19:53.583542 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-cilium-run\") pod \"cilium-jww2g\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " pod="kube-system/cilium-jww2g"
Jan 14 13:19:53.700363 kubelet[3399]: I0114 13:19:53.699708 3399 topology_manager.go:215] "Topology Admit Handler" podUID="4966ff7a-a480-4a2e-a3db-7dda051dd884" podNamespace="kube-system" podName="cilium-operator-599987898-qj9ln"
Jan 14 13:19:53.720649 systemd[1]: Created slice kubepods-besteffort-pod4966ff7a_a480_4a2e_a3db_7dda051dd884.slice - libcontainer container kubepods-besteffort-pod4966ff7a_a480_4a2e_a3db_7dda051dd884.slice.
Jan 14 13:19:53.786190 kubelet[3399]: I0114 13:19:53.785955 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzvgm\" (UniqueName: \"kubernetes.io/projected/4966ff7a-a480-4a2e-a3db-7dda051dd884-kube-api-access-hzvgm\") pod \"cilium-operator-599987898-qj9ln\" (UID: \"4966ff7a-a480-4a2e-a3db-7dda051dd884\") " pod="kube-system/cilium-operator-599987898-qj9ln"
Jan 14 13:19:53.787535 kubelet[3399]: I0114 13:19:53.787441 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4966ff7a-a480-4a2e-a3db-7dda051dd884-cilium-config-path\") pod \"cilium-operator-599987898-qj9ln\" (UID: \"4966ff7a-a480-4a2e-a3db-7dda051dd884\") " pod="kube-system/cilium-operator-599987898-qj9ln"
Jan 14 13:19:53.836686 containerd[1761]: time="2025-01-14T13:19:53.836641280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jww2g,Uid:0efbfd67-e438-44c9-be8f-e424b1d930d9,Namespace:kube-system,Attempt:0,}"
Jan 14 13:19:53.851430 containerd[1761]: time="2025-01-14T13:19:53.851388684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vg7cx,Uid:a38ea366-9df0-43a3-8577-a759f79245d1,Namespace:kube-system,Attempt:0,}"
Jan 14 13:19:53.907404 containerd[1761]: time="2025-01-14T13:19:53.907221014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 13:19:53.907404 containerd[1761]: time="2025-01-14T13:19:53.907291416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 13:19:53.907404 containerd[1761]: time="2025-01-14T13:19:53.907311417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:19:53.908105 containerd[1761]: time="2025-01-14T13:19:53.907841631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:19:53.931868 containerd[1761]: time="2025-01-14T13:19:53.931742586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 13:19:53.931868 containerd[1761]: time="2025-01-14T13:19:53.931815888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 13:19:53.931868 containerd[1761]: time="2025-01-14T13:19:53.931839489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:19:53.932463 containerd[1761]: time="2025-01-14T13:19:53.931940392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:19:53.934613 systemd[1]: Started cri-containerd-79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e.scope - libcontainer container 79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e.
Jan 14 13:19:53.962698 systemd[1]: Started cri-containerd-03a37bdaec1c391e3e60614d24559fb4603ff2dee662035f70fed0b66ce90910.scope - libcontainer container 03a37bdaec1c391e3e60614d24559fb4603ff2dee662035f70fed0b66ce90910.
Jan 14 13:19:53.972384 containerd[1761]: time="2025-01-14T13:19:53.972248296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jww2g,Uid:0efbfd67-e438-44c9-be8f-e424b1d930d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e\""
Jan 14 13:19:53.976488 containerd[1761]: time="2025-01-14T13:19:53.976451812Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 14 13:19:53.996885 containerd[1761]: time="2025-01-14T13:19:53.996821670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vg7cx,Uid:a38ea366-9df0-43a3-8577-a759f79245d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"03a37bdaec1c391e3e60614d24559fb4603ff2dee662035f70fed0b66ce90910\""
Jan 14 13:19:54.001275 containerd[1761]: time="2025-01-14T13:19:54.001220791Z" level=info msg="CreateContainer within sandbox \"03a37bdaec1c391e3e60614d24559fb4603ff2dee662035f70fed0b66ce90910\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 14 13:19:54.025363 containerd[1761]: time="2025-01-14T13:19:54.025297850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-qj9ln,Uid:4966ff7a-a480-4a2e-a3db-7dda051dd884,Namespace:kube-system,Attempt:0,}"
Jan 14 13:19:54.038125 containerd[1761]: time="2025-01-14T13:19:54.037911896Z" level=info msg="CreateContainer within sandbox \"03a37bdaec1c391e3e60614d24559fb4603ff2dee662035f70fed0b66ce90910\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1ba9da9a916a13d14dcd499d9878ef47d3b0e9b1bfa0da0609049a10db5c15ea\""
Jan 14 13:19:54.040660 containerd[1761]: time="2025-01-14T13:19:54.039035427Z" level=info msg="StartContainer for \"1ba9da9a916a13d14dcd499d9878ef47d3b0e9b1bfa0da0609049a10db5c15ea\""
Jan 14 13:19:54.068568 systemd[1]: Started cri-containerd-1ba9da9a916a13d14dcd499d9878ef47d3b0e9b1bfa0da0609049a10db5c15ea.scope - libcontainer container 1ba9da9a916a13d14dcd499d9878ef47d3b0e9b1bfa0da0609049a10db5c15ea.
Jan 14 13:19:54.097149 containerd[1761]: time="2025-01-14T13:19:54.097000716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 13:19:54.097149 containerd[1761]: time="2025-01-14T13:19:54.097073418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 13:19:54.097398 containerd[1761]: time="2025-01-14T13:19:54.097151320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:19:54.097633 containerd[1761]: time="2025-01-14T13:19:54.097335725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:19:54.126689 systemd[1]: Started cri-containerd-6bc68bccdf74ed0ce286037796f50c75d6f70ce32ef97d5284e5050d09d70bed.scope - libcontainer container 6bc68bccdf74ed0ce286037796f50c75d6f70ce32ef97d5284e5050d09d70bed.
Jan 14 13:19:54.133418 containerd[1761]: time="2025-01-14T13:19:54.133370513Z" level=info msg="StartContainer for \"1ba9da9a916a13d14dcd499d9878ef47d3b0e9b1bfa0da0609049a10db5c15ea\" returns successfully"
Jan 14 13:19:54.187070 containerd[1761]: time="2025-01-14T13:19:54.186943881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-qj9ln,Uid:4966ff7a-a480-4a2e-a3db-7dda051dd884,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bc68bccdf74ed0ce286037796f50c75d6f70ce32ef97d5284e5050d09d70bed\""
Jan 14 13:19:55.137617 kubelet[3399]: I0114 13:19:55.137505 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vg7cx" podStartSLOduration=2.137482493 podStartE2EDuration="2.137482493s" podCreationTimestamp="2025-01-14 13:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:19:55.137131784 +0000 UTC m=+16.169131669" watchObservedRunningTime="2025-01-14 13:19:55.137482493 +0000 UTC m=+16.169482478"
Jan 14 13:19:59.801040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3605248735.mount: Deactivated successfully.
Jan 14 13:20:02.032546 containerd[1761]: time="2025-01-14T13:20:02.032468210Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:02.040818 containerd[1761]: time="2025-01-14T13:20:02.040650342Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735331"
Jan 14 13:20:02.045307 containerd[1761]: time="2025-01-14T13:20:02.043049110Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:02.045307 containerd[1761]: time="2025-01-14T13:20:02.044917663Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.06842185s"
Jan 14 13:20:02.045307 containerd[1761]: time="2025-01-14T13:20:02.045019566Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 14 13:20:02.047423 containerd[1761]: time="2025-01-14T13:20:02.047393334Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 14 13:20:02.048658 containerd[1761]: time="2025-01-14T13:20:02.048623069Z" level=info msg="CreateContainer within sandbox \"79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 14 13:20:02.091169 containerd[1761]: time="2025-01-14T13:20:02.091106674Z" level=info msg="CreateContainer within sandbox \"79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549\""
Jan 14 13:20:02.093016 containerd[1761]: time="2025-01-14T13:20:02.091749192Z" level=info msg="StartContainer for \"caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549\""
Jan 14 13:20:02.128539 systemd[1]: Started cri-containerd-caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549.scope - libcontainer container caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549.
Jan 14 13:20:02.159786 containerd[1761]: time="2025-01-14T13:20:02.159632319Z" level=info msg="StartContainer for \"caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549\" returns successfully"
Jan 14 13:20:02.169203 systemd[1]: cri-containerd-caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549.scope: Deactivated successfully.
Jan 14 13:20:03.077222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549-rootfs.mount: Deactivated successfully.
Jan 14 13:20:05.885052 containerd[1761]: time="2025-01-14T13:20:05.884976141Z" level=info msg="shim disconnected" id=caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549 namespace=k8s.io
Jan 14 13:20:05.885052 containerd[1761]: time="2025-01-14T13:20:05.885040542Z" level=warning msg="cleaning up after shim disconnected" id=caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549 namespace=k8s.io
Jan 14 13:20:05.885052 containerd[1761]: time="2025-01-14T13:20:05.885051242Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:20:06.156766 containerd[1761]: time="2025-01-14T13:20:06.156629529Z" level=info msg="CreateContainer within sandbox \"79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 14 13:20:06.194014 containerd[1761]: time="2025-01-14T13:20:06.193964077Z" level=info msg="CreateContainer within sandbox \"79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca\""
Jan 14 13:20:06.194685 containerd[1761]: time="2025-01-14T13:20:06.194588494Z" level=info msg="StartContainer for \"d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca\""
Jan 14 13:20:06.250566 systemd[1]: Started cri-containerd-d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca.scope - libcontainer container d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca.
Jan 14 13:20:06.282542 containerd[1761]: time="2025-01-14T13:20:06.282496761Z" level=info msg="StartContainer for \"d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca\" returns successfully"
Jan 14 13:20:06.296761 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 13:20:06.297114 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:20:06.297200 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:20:06.305816 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:20:06.306064 systemd[1]: cri-containerd-d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca.scope: Deactivated successfully.
Jan 14 13:20:06.326332 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:20:06.332680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca-rootfs.mount: Deactivated successfully.
Jan 14 13:20:06.344338 containerd[1761]: time="2025-01-14T13:20:06.344276095Z" level=info msg="shim disconnected" id=d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca namespace=k8s.io
Jan 14 13:20:06.344338 containerd[1761]: time="2025-01-14T13:20:06.344337497Z" level=warning msg="cleaning up after shim disconnected" id=d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca namespace=k8s.io
Jan 14 13:20:06.344559 containerd[1761]: time="2025-01-14T13:20:06.344369997Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:20:07.159901 containerd[1761]: time="2025-01-14T13:20:07.159638477Z" level=info msg="CreateContainer within sandbox \"79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 14 13:20:07.202533 containerd[1761]: time="2025-01-14T13:20:07.202482379Z" level=info msg="CreateContainer within sandbox \"79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe\""
Jan 14 13:20:07.203784 containerd[1761]: time="2025-01-14T13:20:07.203023494Z" level=info msg="StartContainer for \"565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe\""
Jan 14 13:20:07.240918 systemd[1]: Started cri-containerd-565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe.scope - libcontainer container 565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe.
Jan 14 13:20:07.287693 systemd[1]: cri-containerd-565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe.scope: Deactivated successfully.
Jan 14 13:20:07.292872 containerd[1761]: time="2025-01-14T13:20:07.292739112Z" level=info msg="StartContainer for \"565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe\" returns successfully"
Jan 14 13:20:07.335039 containerd[1761]: time="2025-01-14T13:20:07.334970697Z" level=info msg="shim disconnected" id=565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe namespace=k8s.io
Jan 14 13:20:07.335039 containerd[1761]: time="2025-01-14T13:20:07.335034999Z" level=warning msg="cleaning up after shim disconnected" id=565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe namespace=k8s.io
Jan 14 13:20:07.335039 containerd[1761]: time="2025-01-14T13:20:07.335045799Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:20:07.829955 containerd[1761]: time="2025-01-14T13:20:07.829903187Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:07.831833 containerd[1761]: time="2025-01-14T13:20:07.831767239Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907233"
Jan 14 13:20:07.836619 containerd[1761]: time="2025-01-14T13:20:07.836558274Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 13:20:07.838070 containerd[1761]: time="2025-01-14T13:20:07.837920012Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.790379674s"
Jan 14 13:20:07.838070 containerd[1761]: time="2025-01-14T13:20:07.837957113Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 14 13:20:07.840908 containerd[1761]: time="2025-01-14T13:20:07.840762692Z" level=info msg="CreateContainer within sandbox \"6bc68bccdf74ed0ce286037796f50c75d6f70ce32ef97d5284e5050d09d70bed\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 14 13:20:07.881476 containerd[1761]: time="2025-01-14T13:20:07.881421333Z" level=info msg="CreateContainer within sandbox \"6bc68bccdf74ed0ce286037796f50c75d6f70ce32ef97d5284e5050d09d70bed\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a\""
Jan 14 13:20:07.882391 containerd[1761]: time="2025-01-14T13:20:07.881950347Z" level=info msg="StartContainer for \"3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a\""
Jan 14 13:20:07.908555 systemd[1]: Started cri-containerd-3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a.scope - libcontainer container 3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a.
Jan 14 13:20:07.936244 containerd[1761]: time="2025-01-14T13:20:07.936193670Z" level=info msg="StartContainer for \"3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a\" returns successfully"
Jan 14 13:20:08.166902 containerd[1761]: time="2025-01-14T13:20:08.166548934Z" level=info msg="CreateContainer within sandbox \"79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 14 13:20:08.190477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe-rootfs.mount: Deactivated successfully.
Jan 14 13:20:08.208806 containerd[1761]: time="2025-01-14T13:20:08.208653216Z" level=info msg="CreateContainer within sandbox \"79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63\""
Jan 14 13:20:08.211372 containerd[1761]: time="2025-01-14T13:20:08.209705745Z" level=info msg="StartContainer for \"98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63\""
Jan 14 13:20:08.271546 systemd[1]: Started cri-containerd-98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63.scope - libcontainer container 98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63.
Jan 14 13:20:08.335956 systemd[1]: cri-containerd-98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63.scope: Deactivated successfully.
Jan 14 13:20:08.336847 containerd[1761]: time="2025-01-14T13:20:08.336807412Z" level=info msg="StartContainer for \"98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63\" returns successfully"
Jan 14 13:20:08.797975 containerd[1761]: time="2025-01-14T13:20:08.797892052Z" level=info msg="shim disconnected" id=98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63 namespace=k8s.io
Jan 14 13:20:08.797975 containerd[1761]: time="2025-01-14T13:20:08.797970554Z" level=warning msg="cleaning up after shim disconnected" id=98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63 namespace=k8s.io
Jan 14 13:20:08.797975 containerd[1761]: time="2025-01-14T13:20:08.797981155Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:20:08.814754 containerd[1761]: time="2025-01-14T13:20:08.814679123Z" level=warning msg="cleanup warnings time=\"2025-01-14T13:20:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 14 13:20:09.186087 containerd[1761]: time="2025-01-14T13:20:09.186034745Z" level=info msg="CreateContainer within sandbox \"79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 14 13:20:09.188547 systemd[1]: run-containerd-runc-k8s.io-98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63-runc.kD58ms.mount: Deactivated successfully.
Jan 14 13:20:09.188939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63-rootfs.mount: Deactivated successfully.
Jan 14 13:20:09.220063 kubelet[3399]: I0114 13:20:09.219847 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-qj9ln" podStartSLOduration=2.5715763369999998 podStartE2EDuration="16.219824493s" podCreationTimestamp="2025-01-14 13:19:53 +0000 UTC" firstStartedPulling="2025-01-14 13:19:54.190584081 +0000 UTC m=+15.222584066" lastFinishedPulling="2025-01-14 13:20:07.838832337 +0000 UTC m=+28.870832222" observedRunningTime="2025-01-14 13:20:08.287248022 +0000 UTC m=+29.319248007" watchObservedRunningTime="2025-01-14 13:20:09.219824493 +0000 UTC m=+30.251824478"
Jan 14 13:20:09.244965 containerd[1761]: time="2025-01-14T13:20:09.244913697Z" level=info msg="CreateContainer within sandbox \"79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99\""
Jan 14 13:20:09.245583 containerd[1761]: time="2025-01-14T13:20:09.245554015Z" level=info msg="StartContainer for \"2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99\""
Jan 14 13:20:09.282545 systemd[1]: Started cri-containerd-2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99.scope - libcontainer container 2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99.
Jan 14 13:20:09.319574 containerd[1761]: time="2025-01-14T13:20:09.319435388Z" level=info msg="StartContainer for \"2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99\" returns successfully"
Jan 14 13:20:09.472184 kubelet[3399]: I0114 13:20:09.471202 3399 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 14 13:20:09.515576 kubelet[3399]: I0114 13:20:09.515515 3399 topology_manager.go:215] "Topology Admit Handler" podUID="2be76941-502b-40cf-89dd-cc1843fea985" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cf8jg"
Jan 14 13:20:09.521636 kubelet[3399]: I0114 13:20:09.519399 3399 topology_manager.go:215] "Topology Admit Handler" podUID="b6e95c4e-f981-4d4f-b3f9-77979a7e722e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9h6th"
Jan 14 13:20:09.525983 systemd[1]: Created slice kubepods-burstable-pod2be76941_502b_40cf_89dd_cc1843fea985.slice - libcontainer container kubepods-burstable-pod2be76941_502b_40cf_89dd_cc1843fea985.slice.
Jan 14 13:20:09.538874 systemd[1]: Created slice kubepods-burstable-podb6e95c4e_f981_4d4f_b3f9_77979a7e722e.slice - libcontainer container kubepods-burstable-podb6e95c4e_f981_4d4f_b3f9_77979a7e722e.slice.
Jan 14 13:20:09.596025 kubelet[3399]: I0114 13:20:09.595874 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6e95c4e-f981-4d4f-b3f9-77979a7e722e-config-volume\") pod \"coredns-7db6d8ff4d-9h6th\" (UID: \"b6e95c4e-f981-4d4f-b3f9-77979a7e722e\") " pod="kube-system/coredns-7db6d8ff4d-9h6th"
Jan 14 13:20:09.596025 kubelet[3399]: I0114 13:20:09.595923 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2be76941-502b-40cf-89dd-cc1843fea985-config-volume\") pod \"coredns-7db6d8ff4d-cf8jg\" (UID: \"2be76941-502b-40cf-89dd-cc1843fea985\") " pod="kube-system/coredns-7db6d8ff4d-cf8jg"
Jan 14 13:20:09.596025 kubelet[3399]: I0114 13:20:09.595959 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh997\" (UniqueName: \"kubernetes.io/projected/b6e95c4e-f981-4d4f-b3f9-77979a7e722e-kube-api-access-kh997\") pod \"coredns-7db6d8ff4d-9h6th\" (UID: \"b6e95c4e-f981-4d4f-b3f9-77979a7e722e\") " pod="kube-system/coredns-7db6d8ff4d-9h6th"
Jan 14 13:20:09.596025 kubelet[3399]: I0114 13:20:09.595979 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tv6b\" (UniqueName: \"kubernetes.io/projected/2be76941-502b-40cf-89dd-cc1843fea985-kube-api-access-2tv6b\") pod \"coredns-7db6d8ff4d-cf8jg\" (UID: \"2be76941-502b-40cf-89dd-cc1843fea985\") " pod="kube-system/coredns-7db6d8ff4d-cf8jg"
Jan 14 13:20:09.834427 containerd[1761]: time="2025-01-14T13:20:09.833992129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cf8jg,Uid:2be76941-502b-40cf-89dd-cc1843fea985,Namespace:kube-system,Attempt:0,}"
Jan 14 13:20:09.844167 containerd[1761]: time="2025-01-14T13:20:09.843717002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9h6th,Uid:b6e95c4e-f981-4d4f-b3f9-77979a7e722e,Namespace:kube-system,Attempt:0,}"
Jan 14 13:20:10.202775 systemd[1]: run-containerd-runc-k8s.io-2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99-runc.WfxhBa.mount: Deactivated successfully.
Jan 14 13:20:11.554857 systemd-networkd[1501]: cilium_host: Link UP
Jan 14 13:20:11.555048 systemd-networkd[1501]: cilium_net: Link UP
Jan 14 13:20:11.555053 systemd-networkd[1501]: cilium_net: Gained carrier
Jan 14 13:20:11.555270 systemd-networkd[1501]: cilium_host: Gained carrier
Jan 14 13:20:11.555516 systemd-networkd[1501]: cilium_host: Gained IPv6LL
Jan 14 13:20:11.750059 systemd-networkd[1501]: cilium_vxlan: Link UP
Jan 14 13:20:11.750070 systemd-networkd[1501]: cilium_vxlan: Gained carrier
Jan 14 13:20:11.791562 systemd-networkd[1501]: cilium_net: Gained IPv6LL
Jan 14 13:20:12.098400 kernel: NET: Registered PF_ALG protocol family
Jan 14 13:20:12.783536 systemd-networkd[1501]: cilium_vxlan: Gained IPv6LL
Jan 14 13:20:12.842830 systemd-networkd[1501]: lxc_health: Link UP
Jan 14 13:20:12.860799 systemd-networkd[1501]: lxc_health: Gained carrier
Jan 14 13:20:13.442280 systemd-networkd[1501]: lxc22984dee61f0: Link UP
Jan 14 13:20:13.453410 kernel: eth0: renamed from tmp66e67
Jan 14 13:20:13.462420 systemd-networkd[1501]: lxc22984dee61f0: Gained carrier
Jan 14 13:20:13.487065 systemd-networkd[1501]: lxc98358b8fb3da: Link UP
Jan 14 13:20:13.494389 kernel: eth0: renamed from tmp4c09a
Jan 14 13:20:13.499399 systemd-networkd[1501]: lxc98358b8fb3da: Gained carrier
Jan 14 13:20:13.870933 kubelet[3399]: I0114 13:20:13.870843 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jww2g" podStartSLOduration=12.798483794 podStartE2EDuration="20.870818754s" podCreationTimestamp="2025-01-14 13:19:53 +0000 UTC" firstStartedPulling="2025-01-14 13:19:53.974290952 +0000 UTC m=+15.006290837" lastFinishedPulling="2025-01-14 13:20:02.046625912 +0000 UTC m=+23.078625797" observedRunningTime="2025-01-14 13:20:10.230501956 +0000 UTC m=+31.262501841" watchObservedRunningTime="2025-01-14 13:20:13.870818754 +0000 UTC m=+34.902818639"
Jan 14 13:20:13.999511 systemd-networkd[1501]: lxc_health: Gained IPv6LL
Jan 14 13:20:15.343631 systemd-networkd[1501]: lxc22984dee61f0: Gained IPv6LL
Jan 14 13:20:15.473562 systemd-networkd[1501]: lxc98358b8fb3da: Gained IPv6LL
Jan 14 13:20:17.213429 containerd[1761]: time="2025-01-14T13:20:17.211546650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 13:20:17.213429 containerd[1761]: time="2025-01-14T13:20:17.211618051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 13:20:17.213429 containerd[1761]: time="2025-01-14T13:20:17.211640952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:20:17.213429 containerd[1761]: time="2025-01-14T13:20:17.211731754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:20:17.259640 systemd[1]: Started cri-containerd-4c09a004f7d648c31f57bd3040a046431b50b869094929487e01f790bf11e7a8.scope - libcontainer container 4c09a004f7d648c31f57bd3040a046431b50b869094929487e01f790bf11e7a8.
Jan 14 13:20:17.282474 containerd[1761]: time="2025-01-14T13:20:17.282364949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 13:20:17.282701 containerd[1761]: time="2025-01-14T13:20:17.282490252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 13:20:17.282701 containerd[1761]: time="2025-01-14T13:20:17.282535154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:20:17.282849 containerd[1761]: time="2025-01-14T13:20:17.282714258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 13:20:17.319558 systemd[1]: Started cri-containerd-66e6719bc68b539a10a2d9f3e4d362d766ad94a236c2b2505409818d9ba5af22.scope - libcontainer container 66e6719bc68b539a10a2d9f3e4d362d766ad94a236c2b2505409818d9ba5af22.
Jan 14 13:20:17.380957 containerd[1761]: time="2025-01-14T13:20:17.380830751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9h6th,Uid:b6e95c4e-f981-4d4f-b3f9-77979a7e722e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c09a004f7d648c31f57bd3040a046431b50b869094929487e01f790bf11e7a8\""
Jan 14 13:20:17.387728 containerd[1761]: time="2025-01-14T13:20:17.387687026Z" level=info msg="CreateContainer within sandbox \"4c09a004f7d648c31f57bd3040a046431b50b869094929487e01f790bf11e7a8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 14 13:20:17.415065 containerd[1761]: time="2025-01-14T13:20:17.415002320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cf8jg,Uid:2be76941-502b-40cf-89dd-cc1843fea985,Namespace:kube-system,Attempt:0,} returns sandbox id \"66e6719bc68b539a10a2d9f3e4d362d766ad94a236c2b2505409818d9ba5af22\""
Jan 14 13:20:17.422464 containerd[1761]: time="2025-01-14T13:20:17.422110200Z" level=info msg="CreateContainer within sandbox \"66e6719bc68b539a10a2d9f3e4d362d766ad94a236c2b2505409818d9ba5af22\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 14 13:20:17.438824 containerd[1761]: time="2025-01-14T13:20:17.438551118Z" level=info msg="CreateContainer within sandbox \"4c09a004f7d648c31f57bd3040a046431b50b869094929487e01f790bf11e7a8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"74e1c10f4b709d8346bdddfc220deb14193dff0a9c5ea548363c495063d13c8d\""
Jan 14 13:20:17.445866 containerd[1761]: time="2025-01-14T13:20:17.444810877Z" level=info msg="StartContainer for \"74e1c10f4b709d8346bdddfc220deb14193dff0a9c5ea548363c495063d13c8d\""
Jan 14 13:20:17.477216 containerd[1761]: time="2025-01-14T13:20:17.476912993Z" level=info msg="CreateContainer within sandbox \"66e6719bc68b539a10a2d9f3e4d362d766ad94a236c2b2505409818d9ba5af22\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d576330a2f5f00630337581390fa06733d1a6af5ec75ce26c7baabd7a926158\""
Jan 14 13:20:17.480251 containerd[1761]: time="2025-01-14T13:20:17.478870143Z" level=info msg="StartContainer for \"9d576330a2f5f00630337581390fa06733d1a6af5ec75ce26c7baabd7a926158\""
Jan 14 13:20:17.490831 systemd[1]: Started cri-containerd-74e1c10f4b709d8346bdddfc220deb14193dff0a9c5ea548363c495063d13c8d.scope - libcontainer container 74e1c10f4b709d8346bdddfc220deb14193dff0a9c5ea548363c495063d13c8d.
Jan 14 13:20:17.526611 systemd[1]: Started cri-containerd-9d576330a2f5f00630337581390fa06733d1a6af5ec75ce26c7baabd7a926158.scope - libcontainer container 9d576330a2f5f00630337581390fa06733d1a6af5ec75ce26c7baabd7a926158.
Jan 14 13:20:17.548603 containerd[1761]: time="2025-01-14T13:20:17.548552114Z" level=info msg="StartContainer for \"74e1c10f4b709d8346bdddfc220deb14193dff0a9c5ea548363c495063d13c8d\" returns successfully"
Jan 14 13:20:17.572413 containerd[1761]: time="2025-01-14T13:20:17.572343118Z" level=info msg="StartContainer for \"9d576330a2f5f00630337581390fa06733d1a6af5ec75ce26c7baabd7a926158\" returns successfully"
Jan 14 13:20:18.222991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1245464466.mount: Deactivated successfully.
Jan 14 13:20:18.247179 kubelet[3399]: I0114 13:20:18.246815 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cf8jg" podStartSLOduration=25.246786957 podStartE2EDuration="25.246786957s" podCreationTimestamp="2025-01-14 13:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:20:18.231088159 +0000 UTC m=+39.263088044" watchObservedRunningTime="2025-01-14 13:20:18.246786957 +0000 UTC m=+39.278786842"
Jan 14 13:21:39.583794 systemd[1]: Started sshd@7-10.200.4.13:22-10.200.16.10:37262.service - OpenSSH per-connection server daemon (10.200.16.10:37262).
Jan 14 13:21:40.194435 sshd[4780]: Accepted publickey for core from 10.200.16.10 port 37262 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:21:40.196433 sshd-session[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:21:40.201834 systemd-logind[1734]: New session 10 of user core.
Jan 14 13:21:40.208706 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 14 13:21:40.696117 sshd[4782]: Connection closed by 10.200.16.10 port 37262
Jan 14 13:21:40.697000 sshd-session[4780]: pam_unix(sshd:session): session closed for user core
Jan 14 13:21:40.701541 systemd[1]: sshd@7-10.200.4.13:22-10.200.16.10:37262.service: Deactivated successfully.
Jan 14 13:21:40.703894 systemd[1]: session-10.scope: Deactivated successfully.
Jan 14 13:21:40.704835 systemd-logind[1734]: Session 10 logged out. Waiting for processes to exit.
Jan 14 13:21:40.705843 systemd-logind[1734]: Removed session 10.
Jan 14 13:21:45.809684 systemd[1]: Started sshd@8-10.200.4.13:22-10.200.16.10:37270.service - OpenSSH per-connection server daemon (10.200.16.10:37270).
Jan 14 13:21:46.418938 sshd[4794]: Accepted publickey for core from 10.200.16.10 port 37270 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:21:46.420535 sshd-session[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:21:46.425400 systemd-logind[1734]: New session 11 of user core.
Jan 14 13:21:46.430531 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 14 13:21:46.911546 sshd[4796]: Connection closed by 10.200.16.10 port 37270
Jan 14 13:21:46.912454 sshd-session[4794]: pam_unix(sshd:session): session closed for user core
Jan 14 13:21:46.916982 systemd[1]: sshd@8-10.200.4.13:22-10.200.16.10:37270.service: Deactivated successfully.
Jan 14 13:21:46.920933 systemd[1]: session-11.scope: Deactivated successfully.
Jan 14 13:21:46.922441 systemd-logind[1734]: Session 11 logged out. Waiting for processes to exit.
Jan 14 13:21:46.923870 systemd-logind[1734]: Removed session 11.
Jan 14 13:21:52.024679 systemd[1]: Started sshd@9-10.200.4.13:22-10.200.16.10:51552.service - OpenSSH per-connection server daemon (10.200.16.10:51552).
Jan 14 13:21:52.638515 sshd[4808]: Accepted publickey for core from 10.200.16.10 port 51552 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:21:52.639948 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:21:52.643897 systemd-logind[1734]: New session 12 of user core.
Jan 14 13:21:52.648558 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 14 13:21:53.134380 sshd[4810]: Connection closed by 10.200.16.10 port 51552
Jan 14 13:21:53.135221 sshd-session[4808]: pam_unix(sshd:session): session closed for user core
Jan 14 13:21:53.138704 systemd[1]: sshd@9-10.200.4.13:22-10.200.16.10:51552.service: Deactivated successfully.
Jan 14 13:21:53.141434 systemd[1]: session-12.scope: Deactivated successfully.
Jan 14 13:21:53.143651 systemd-logind[1734]: Session 12 logged out. Waiting for processes to exit.
Jan 14 13:21:53.145174 systemd-logind[1734]: Removed session 12.
Jan 14 13:21:58.248717 systemd[1]: Started sshd@10-10.200.4.13:22-10.200.16.10:46396.service - OpenSSH per-connection server daemon (10.200.16.10:46396).
Jan 14 13:21:58.854029 sshd[4824]: Accepted publickey for core from 10.200.16.10 port 46396 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:21:58.855588 sshd-session[4824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:21:58.860615 systemd-logind[1734]: New session 13 of user core.
Jan 14 13:21:58.865499 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 14 13:21:59.346484 sshd[4826]: Connection closed by 10.200.16.10 port 46396
Jan 14 13:21:59.348303 sshd-session[4824]: pam_unix(sshd:session): session closed for user core
Jan 14 13:21:59.351116 systemd[1]: sshd@10-10.200.4.13:22-10.200.16.10:46396.service: Deactivated successfully.
Jan 14 13:21:59.355215 systemd[1]: session-13.scope: Deactivated successfully.
Jan 14 13:21:59.358179 systemd-logind[1734]: Session 13 logged out. Waiting for processes to exit.
Jan 14 13:21:59.360123 systemd-logind[1734]: Removed session 13.
Jan 14 13:22:04.456651 systemd[1]: Started sshd@11-10.200.4.13:22-10.200.16.10:46402.service - OpenSSH per-connection server daemon (10.200.16.10:46402).
Jan 14 13:22:05.064311 sshd[4837]: Accepted publickey for core from 10.200.16.10 port 46402 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:05.065984 sshd-session[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:05.074918 systemd-logind[1734]: New session 14 of user core.
Jan 14 13:22:05.080512 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 14 13:22:05.552014 sshd[4839]: Connection closed by 10.200.16.10 port 46402
Jan 14 13:22:05.552874 sshd-session[4837]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:05.555898 systemd[1]: sshd@11-10.200.4.13:22-10.200.16.10:46402.service: Deactivated successfully.
Jan 14 13:22:05.558212 systemd[1]: session-14.scope: Deactivated successfully.
Jan 14 13:22:05.559709 systemd-logind[1734]: Session 14 logged out. Waiting for processes to exit.
Jan 14 13:22:05.560980 systemd-logind[1734]: Removed session 14.
Jan 14 13:22:05.663516 systemd[1]: Started sshd@12-10.200.4.13:22-10.200.16.10:46416.service - OpenSSH per-connection server daemon (10.200.16.10:46416).
Jan 14 13:22:06.289402 sshd[4851]: Accepted publickey for core from 10.200.16.10 port 46416 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:06.289556 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:06.295161 systemd-logind[1734]: New session 15 of user core.
Jan 14 13:22:06.301524 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 14 13:22:06.827287 sshd[4853]: Connection closed by 10.200.16.10 port 46416
Jan 14 13:22:06.828100 sshd-session[4851]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:06.831685 systemd[1]: sshd@12-10.200.4.13:22-10.200.16.10:46416.service: Deactivated successfully.
Jan 14 13:22:06.834128 systemd[1]: session-15.scope: Deactivated successfully.
Jan 14 13:22:06.836022 systemd-logind[1734]: Session 15 logged out. Waiting for processes to exit.
Jan 14 13:22:06.837382 systemd-logind[1734]: Removed session 15.
Jan 14 13:22:06.938068 systemd[1]: Started sshd@13-10.200.4.13:22-10.200.16.10:39522.service - OpenSSH per-connection server daemon (10.200.16.10:39522).
Jan 14 13:22:07.547532 sshd[4862]: Accepted publickey for core from 10.200.16.10 port 39522 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:07.549229 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:07.554017 systemd-logind[1734]: New session 16 of user core.
Jan 14 13:22:07.558493 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 14 13:22:08.039069 sshd[4864]: Connection closed by 10.200.16.10 port 39522
Jan 14 13:22:08.039977 sshd-session[4862]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:08.044195 systemd[1]: sshd@13-10.200.4.13:22-10.200.16.10:39522.service: Deactivated successfully.
Jan 14 13:22:08.046419 systemd[1]: session-16.scope: Deactivated successfully.
Jan 14 13:22:08.047299 systemd-logind[1734]: Session 16 logged out. Waiting for processes to exit.
Jan 14 13:22:08.048452 systemd-logind[1734]: Removed session 16.
Jan 14 13:22:13.153673 systemd[1]: Started sshd@14-10.200.4.13:22-10.200.16.10:39530.service - OpenSSH per-connection server daemon (10.200.16.10:39530).
Jan 14 13:22:13.759413 sshd[4875]: Accepted publickey for core from 10.200.16.10 port 39530 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:13.760906 sshd-session[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:13.765545 systemd-logind[1734]: New session 17 of user core.
Jan 14 13:22:13.771536 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 14 13:22:14.247711 sshd[4877]: Connection closed by 10.200.16.10 port 39530
Jan 14 13:22:14.248759 sshd-session[4875]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:14.252871 systemd[1]: sshd@14-10.200.4.13:22-10.200.16.10:39530.service: Deactivated successfully.
Jan 14 13:22:14.254946 systemd[1]: session-17.scope: Deactivated successfully.
Jan 14 13:22:14.255827 systemd-logind[1734]: Session 17 logged out. Waiting for processes to exit.
Jan 14 13:22:14.256913 systemd-logind[1734]: Removed session 17.
Jan 14 13:22:14.360680 systemd[1]: Started sshd@15-10.200.4.13:22-10.200.16.10:39536.service - OpenSSH per-connection server daemon (10.200.16.10:39536).
Jan 14 13:22:14.976523 sshd[4888]: Accepted publickey for core from 10.200.16.10 port 39536 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:14.977899 sshd-session[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:14.983098 systemd-logind[1734]: New session 18 of user core.
Jan 14 13:22:14.987557 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 14 13:22:15.606566 sshd[4890]: Connection closed by 10.200.16.10 port 39536
Jan 14 13:22:15.607700 sshd-session[4888]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:15.610385 systemd[1]: sshd@15-10.200.4.13:22-10.200.16.10:39536.service: Deactivated successfully.
Jan 14 13:22:15.612955 systemd[1]: session-18.scope: Deactivated successfully.
Jan 14 13:22:15.614525 systemd-logind[1734]: Session 18 logged out. Waiting for processes to exit.
Jan 14 13:22:15.615708 systemd-logind[1734]: Removed session 18.
Jan 14 13:22:15.721808 systemd[1]: Started sshd@16-10.200.4.13:22-10.200.16.10:39540.service - OpenSSH per-connection server daemon (10.200.16.10:39540).
Jan 14 13:22:16.326469 sshd[4898]: Accepted publickey for core from 10.200.16.10 port 39540 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:16.328007 sshd-session[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:16.332953 systemd-logind[1734]: New session 19 of user core.
Jan 14 13:22:16.342544 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 14 13:22:18.273948 sshd[4900]: Connection closed by 10.200.16.10 port 39540
Jan 14 13:22:18.274746 sshd-session[4898]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:18.277753 systemd[1]: sshd@16-10.200.4.13:22-10.200.16.10:39540.service: Deactivated successfully.
Jan 14 13:22:18.279976 systemd[1]: session-19.scope: Deactivated successfully.
Jan 14 13:22:18.282104 systemd-logind[1734]: Session 19 logged out. Waiting for processes to exit.
Jan 14 13:22:18.283457 systemd-logind[1734]: Removed session 19.
Jan 14 13:22:18.381813 systemd[1]: Started sshd@17-10.200.4.13:22-10.200.16.10:47954.service - OpenSSH per-connection server daemon (10.200.16.10:47954).
Jan 14 13:22:18.997246 sshd[4916]: Accepted publickey for core from 10.200.16.10 port 47954 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0
Jan 14 13:22:18.998804 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:22:19.003735 systemd-logind[1734]: New session 20 of user core.
Jan 14 13:22:19.007528 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 14 13:22:19.615312 sshd[4918]: Connection closed by 10.200.16.10 port 47954
Jan 14 13:22:19.616272 sshd-session[4916]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:19.621218 systemd-logind[1734]: Session 20 logged out. Waiting for processes to exit.
Jan 14 13:22:19.621823 systemd[1]: sshd@17-10.200.4.13:22-10.200.16.10:47954.service: Deactivated successfully.
Jan 14 13:22:19.623802 systemd[1]: session-20.scope: Deactivated successfully.
Jan 14 13:22:19.625244 systemd-logind[1734]: Removed session 20.
Jan 14 13:22:19.732759 systemd[1]: Started sshd@18-10.200.4.13:22-10.200.16.10:47970.service - OpenSSH per-connection server daemon (10.200.16.10:47970).
Jan 14 13:22:20.343042 sshd[4927]: Accepted publickey for core from 10.200.16.10 port 47970 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:22:20.344553 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:22:20.348866 systemd-logind[1734]: New session 21 of user core. Jan 14 13:22:20.353518 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 14 13:22:20.832921 sshd[4929]: Connection closed by 10.200.16.10 port 47970 Jan 14 13:22:20.833807 sshd-session[4927]: pam_unix(sshd:session): session closed for user core Jan 14 13:22:20.837654 systemd[1]: sshd@18-10.200.4.13:22-10.200.16.10:47970.service: Deactivated successfully. Jan 14 13:22:20.839798 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 13:22:20.841009 systemd-logind[1734]: Session 21 logged out. Waiting for processes to exit. Jan 14 13:22:20.842257 systemd-logind[1734]: Removed session 21. Jan 14 13:22:25.945661 systemd[1]: Started sshd@19-10.200.4.13:22-10.200.16.10:52334.service - OpenSSH per-connection server daemon (10.200.16.10:52334). Jan 14 13:22:26.563215 sshd[4945]: Accepted publickey for core from 10.200.16.10 port 52334 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:22:26.564863 sshd-session[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:22:26.569413 systemd-logind[1734]: New session 22 of user core. Jan 14 13:22:26.576525 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 14 13:22:27.050046 sshd[4947]: Connection closed by 10.200.16.10 port 52334 Jan 14 13:22:27.050942 sshd-session[4945]: pam_unix(sshd:session): session closed for user core Jan 14 13:22:27.055549 systemd[1]: sshd@19-10.200.4.13:22-10.200.16.10:52334.service: Deactivated successfully. Jan 14 13:22:27.056466 systemd-logind[1734]: Session 22 logged out. Waiting for processes to exit. 
Jan 14 13:22:27.059129 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 13:22:27.061317 systemd-logind[1734]: Removed session 22. Jan 14 13:22:32.163715 systemd[1]: Started sshd@20-10.200.4.13:22-10.200.16.10:52336.service - OpenSSH per-connection server daemon (10.200.16.10:52336). Jan 14 13:22:32.796879 sshd[4958]: Accepted publickey for core from 10.200.16.10 port 52336 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:22:32.797618 sshd-session[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:22:32.803183 systemd-logind[1734]: New session 23 of user core. Jan 14 13:22:32.810539 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 14 13:22:33.292853 sshd[4961]: Connection closed by 10.200.16.10 port 52336 Jan 14 13:22:33.293606 sshd-session[4958]: pam_unix(sshd:session): session closed for user core Jan 14 13:22:33.297388 systemd[1]: sshd@20-10.200.4.13:22-10.200.16.10:52336.service: Deactivated successfully. Jan 14 13:22:33.299670 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 13:22:33.300910 systemd-logind[1734]: Session 23 logged out. Waiting for processes to exit. Jan 14 13:22:33.302262 systemd-logind[1734]: Removed session 23. Jan 14 13:22:38.409969 systemd[1]: Started sshd@21-10.200.4.13:22-10.200.16.10:41028.service - OpenSSH per-connection server daemon (10.200.16.10:41028). Jan 14 13:22:39.023944 sshd[4972]: Accepted publickey for core from 10.200.16.10 port 41028 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:22:39.025241 sshd-session[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:22:39.030047 systemd-logind[1734]: New session 24 of user core. Jan 14 13:22:39.033519 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 14 13:22:39.518164 sshd[4974]: Connection closed by 10.200.16.10 port 41028 Jan 14 13:22:39.519889 sshd-session[4972]: pam_unix(sshd:session): session closed for user core Jan 14 13:22:39.522567 systemd[1]: sshd@21-10.200.4.13:22-10.200.16.10:41028.service: Deactivated successfully. Jan 14 13:22:39.525111 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 13:22:39.526777 systemd-logind[1734]: Session 24 logged out. Waiting for processes to exit. Jan 14 13:22:39.528191 systemd-logind[1734]: Removed session 24. Jan 14 13:22:39.627721 systemd[1]: Started sshd@22-10.200.4.13:22-10.200.16.10:41044.service - OpenSSH per-connection server daemon (10.200.16.10:41044). Jan 14 13:22:40.237217 sshd[4986]: Accepted publickey for core from 10.200.16.10 port 41044 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:22:40.238957 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:22:40.243478 systemd-logind[1734]: New session 25 of user core. Jan 14 13:22:40.249524 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 14 13:22:41.944469 kubelet[3399]: I0114 13:22:41.944373 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9h6th" podStartSLOduration=168.944335312 podStartE2EDuration="2m48.944335312s" podCreationTimestamp="2025-01-14 13:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:20:18.269165326 +0000 UTC m=+39.301165311" watchObservedRunningTime="2025-01-14 13:22:41.944335312 +0000 UTC m=+182.976335297" Jan 14 13:22:41.966398 containerd[1761]: time="2025-01-14T13:22:41.965025751Z" level=info msg="StopContainer for \"3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a\" with timeout 30 (s)" Jan 14 13:22:41.972191 systemd[1]: run-containerd-runc-k8s.io-2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99-runc.qfa8OQ.mount: Deactivated successfully. Jan 14 13:22:41.976748 containerd[1761]: time="2025-01-14T13:22:41.976685441Z" level=info msg="Stop container \"3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a\" with signal terminated" Jan 14 13:22:41.988105 containerd[1761]: time="2025-01-14T13:22:41.988058927Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 13:22:41.992945 systemd[1]: cri-containerd-3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a.scope: Deactivated successfully. 
Jan 14 13:22:41.996620 containerd[1761]: time="2025-01-14T13:22:41.996548166Z" level=info msg="StopContainer for \"2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99\" with timeout 2 (s)" Jan 14 13:22:41.997064 containerd[1761]: time="2025-01-14T13:22:41.996999273Z" level=info msg="Stop container \"2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99\" with signal terminated" Jan 14 13:22:42.010005 systemd-networkd[1501]: lxc_health: Link DOWN Jan 14 13:22:42.010015 systemd-networkd[1501]: lxc_health: Lost carrier Jan 14 13:22:42.033946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a-rootfs.mount: Deactivated successfully. Jan 14 13:22:42.035001 systemd[1]: cri-containerd-2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99.scope: Deactivated successfully. Jan 14 13:22:42.036460 systemd[1]: cri-containerd-2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99.scope: Consumed 7.264s CPU time. Jan 14 13:22:42.058469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99-rootfs.mount: Deactivated successfully. 
Jan 14 13:22:42.097900 containerd[1761]: time="2025-01-14T13:22:42.097810021Z" level=info msg="shim disconnected" id=2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99 namespace=k8s.io Jan 14 13:22:42.097900 containerd[1761]: time="2025-01-14T13:22:42.097895122Z" level=warning msg="cleaning up after shim disconnected" id=2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99 namespace=k8s.io Jan 14 13:22:42.097900 containerd[1761]: time="2025-01-14T13:22:42.097906123Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:22:42.098723 containerd[1761]: time="2025-01-14T13:22:42.098492832Z" level=info msg="shim disconnected" id=3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a namespace=k8s.io Jan 14 13:22:42.098723 containerd[1761]: time="2025-01-14T13:22:42.098536033Z" level=warning msg="cleaning up after shim disconnected" id=3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a namespace=k8s.io Jan 14 13:22:42.098723 containerd[1761]: time="2025-01-14T13:22:42.098546133Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:22:42.125369 containerd[1761]: time="2025-01-14T13:22:42.125312471Z" level=info msg="StopContainer for \"3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a\" returns successfully" Jan 14 13:22:42.126080 containerd[1761]: time="2025-01-14T13:22:42.126037982Z" level=info msg="StopPodSandbox for \"6bc68bccdf74ed0ce286037796f50c75d6f70ce32ef97d5284e5050d09d70bed\"" Jan 14 13:22:42.126406 containerd[1761]: time="2025-01-14T13:22:42.126080683Z" level=info msg="Container to stop \"3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 14 13:22:42.129196 containerd[1761]: time="2025-01-14T13:22:42.129008731Z" level=info msg="StopContainer for \"2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99\" returns successfully" Jan 14 13:22:42.129087 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-6bc68bccdf74ed0ce286037796f50c75d6f70ce32ef97d5284e5050d09d70bed-shm.mount: Deactivated successfully. Jan 14 13:22:42.131111 containerd[1761]: time="2025-01-14T13:22:42.130573557Z" level=info msg="StopPodSandbox for \"79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e\"" Jan 14 13:22:42.131111 containerd[1761]: time="2025-01-14T13:22:42.130614657Z" level=info msg="Container to stop \"98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 14 13:22:42.131111 containerd[1761]: time="2025-01-14T13:22:42.130657958Z" level=info msg="Container to stop \"2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 14 13:22:42.131111 containerd[1761]: time="2025-01-14T13:22:42.130671658Z" level=info msg="Container to stop \"caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 14 13:22:42.131111 containerd[1761]: time="2025-01-14T13:22:42.130685758Z" level=info msg="Container to stop \"d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 14 13:22:42.131111 containerd[1761]: time="2025-01-14T13:22:42.130697559Z" level=info msg="Container to stop \"565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 14 13:22:42.139572 systemd[1]: cri-containerd-6bc68bccdf74ed0ce286037796f50c75d6f70ce32ef97d5284e5050d09d70bed.scope: Deactivated successfully. Jan 14 13:22:42.140571 systemd[1]: cri-containerd-79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e.scope: Deactivated successfully. 
Jan 14 13:22:42.190577 containerd[1761]: time="2025-01-14T13:22:42.190500936Z" level=info msg="shim disconnected" id=6bc68bccdf74ed0ce286037796f50c75d6f70ce32ef97d5284e5050d09d70bed namespace=k8s.io Jan 14 13:22:42.190577 containerd[1761]: time="2025-01-14T13:22:42.190572337Z" level=warning msg="cleaning up after shim disconnected" id=6bc68bccdf74ed0ce286037796f50c75d6f70ce32ef97d5284e5050d09d70bed namespace=k8s.io Jan 14 13:22:42.190577 containerd[1761]: time="2025-01-14T13:22:42.190583238Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:22:42.191026 containerd[1761]: time="2025-01-14T13:22:42.190830242Z" level=info msg="shim disconnected" id=79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e namespace=k8s.io Jan 14 13:22:42.191026 containerd[1761]: time="2025-01-14T13:22:42.190874742Z" level=warning msg="cleaning up after shim disconnected" id=79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e namespace=k8s.io Jan 14 13:22:42.191026 containerd[1761]: time="2025-01-14T13:22:42.190884142Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:22:42.214105 containerd[1761]: time="2025-01-14T13:22:42.212225391Z" level=warning msg="cleanup warnings time=\"2025-01-14T13:22:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 14 13:22:42.214105 containerd[1761]: time="2025-01-14T13:22:42.213863718Z" level=info msg="TearDown network for sandbox \"79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e\" successfully" Jan 14 13:22:42.214105 containerd[1761]: time="2025-01-14T13:22:42.213891619Z" level=info msg="StopPodSandbox for \"79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e\" returns successfully" Jan 14 13:22:42.214990 containerd[1761]: time="2025-01-14T13:22:42.214917235Z" level=info msg="TearDown network for sandbox 
\"6bc68bccdf74ed0ce286037796f50c75d6f70ce32ef97d5284e5050d09d70bed\" successfully" Jan 14 13:22:42.214990 containerd[1761]: time="2025-01-14T13:22:42.214947136Z" level=info msg="StopPodSandbox for \"6bc68bccdf74ed0ce286037796f50c75d6f70ce32ef97d5284e5050d09d70bed\" returns successfully" Jan 14 13:22:42.302405 kubelet[3399]: I0114 13:22:42.301930 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0efbfd67-e438-44c9-be8f-e424b1d930d9" (UID: "0efbfd67-e438-44c9-be8f-e424b1d930d9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:22:42.302405 kubelet[3399]: I0114 13:22:42.301938 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-cilium-cgroup\") pod \"0efbfd67-e438-44c9-be8f-e424b1d930d9\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " Jan 14 13:22:42.302405 kubelet[3399]: I0114 13:22:42.302002 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-xtables-lock\") pod \"0efbfd67-e438-44c9-be8f-e424b1d930d9\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " Jan 14 13:22:42.302405 kubelet[3399]: I0114 13:22:42.302021 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0efbfd67-e438-44c9-be8f-e424b1d930d9-hubble-tls\") pod \"0efbfd67-e438-44c9-be8f-e424b1d930d9\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " Jan 14 13:22:42.302405 kubelet[3399]: I0114 13:22:42.302059 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-xtables-lock" (OuterVolumeSpecName: 
"xtables-lock") pod "0efbfd67-e438-44c9-be8f-e424b1d930d9" (UID: "0efbfd67-e438-44c9-be8f-e424b1d930d9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:22:42.302405 kubelet[3399]: I0114 13:22:42.302095 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-hostproc\") pod \"0efbfd67-e438-44c9-be8f-e424b1d930d9\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " Jan 14 13:22:42.302815 kubelet[3399]: I0114 13:22:42.302126 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-host-proc-sys-kernel\") pod \"0efbfd67-e438-44c9-be8f-e424b1d930d9\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " Jan 14 13:22:42.302815 kubelet[3399]: I0114 13:22:42.302153 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0efbfd67-e438-44c9-be8f-e424b1d930d9-clustermesh-secrets\") pod \"0efbfd67-e438-44c9-be8f-e424b1d930d9\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " Jan 14 13:22:42.302815 kubelet[3399]: I0114 13:22:42.302176 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-host-proc-sys-net\") pod \"0efbfd67-e438-44c9-be8f-e424b1d930d9\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " Jan 14 13:22:42.302815 kubelet[3399]: I0114 13:22:42.302194 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-lib-modules\") pod \"0efbfd67-e438-44c9-be8f-e424b1d930d9\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " Jan 14 13:22:42.302815 kubelet[3399]: 
I0114 13:22:42.302223 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-cni-path\") pod \"0efbfd67-e438-44c9-be8f-e424b1d930d9\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " Jan 14 13:22:42.302815 kubelet[3399]: I0114 13:22:42.302246 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4966ff7a-a480-4a2e-a3db-7dda051dd884-cilium-config-path\") pod \"4966ff7a-a480-4a2e-a3db-7dda051dd884\" (UID: \"4966ff7a-a480-4a2e-a3db-7dda051dd884\") " Jan 14 13:22:42.303066 kubelet[3399]: I0114 13:22:42.302266 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-bpf-maps\") pod \"0efbfd67-e438-44c9-be8f-e424b1d930d9\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " Jan 14 13:22:42.303066 kubelet[3399]: I0114 13:22:42.302289 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8d2v\" (UniqueName: \"kubernetes.io/projected/0efbfd67-e438-44c9-be8f-e424b1d930d9-kube-api-access-p8d2v\") pod \"0efbfd67-e438-44c9-be8f-e424b1d930d9\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " Jan 14 13:22:42.303066 kubelet[3399]: I0114 13:22:42.302312 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzvgm\" (UniqueName: \"kubernetes.io/projected/4966ff7a-a480-4a2e-a3db-7dda051dd884-kube-api-access-hzvgm\") pod \"4966ff7a-a480-4a2e-a3db-7dda051dd884\" (UID: \"4966ff7a-a480-4a2e-a3db-7dda051dd884\") " Jan 14 13:22:42.303066 kubelet[3399]: I0114 13:22:42.302337 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-cilium-run\") pod 
\"0efbfd67-e438-44c9-be8f-e424b1d930d9\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " Jan 14 13:22:42.303066 kubelet[3399]: I0114 13:22:42.302377 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0efbfd67-e438-44c9-be8f-e424b1d930d9-cilium-config-path\") pod \"0efbfd67-e438-44c9-be8f-e424b1d930d9\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " Jan 14 13:22:42.303066 kubelet[3399]: I0114 13:22:42.302397 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-etc-cni-netd\") pod \"0efbfd67-e438-44c9-be8f-e424b1d930d9\" (UID: \"0efbfd67-e438-44c9-be8f-e424b1d930d9\") " Jan 14 13:22:42.303302 kubelet[3399]: I0114 13:22:42.302443 3399 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-cilium-cgroup\") on node \"ci-4152.2.0-a-42c09c22a8\" DevicePath \"\"" Jan 14 13:22:42.303302 kubelet[3399]: I0114 13:22:42.302455 3399 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-xtables-lock\") on node \"ci-4152.2.0-a-42c09c22a8\" DevicePath \"\"" Jan 14 13:22:42.303302 kubelet[3399]: I0114 13:22:42.302483 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0efbfd67-e438-44c9-be8f-e424b1d930d9" (UID: "0efbfd67-e438-44c9-be8f-e424b1d930d9"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:22:42.303302 kubelet[3399]: I0114 13:22:42.302509 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-hostproc" (OuterVolumeSpecName: "hostproc") pod "0efbfd67-e438-44c9-be8f-e424b1d930d9" (UID: "0efbfd67-e438-44c9-be8f-e424b1d930d9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:22:42.303302 kubelet[3399]: I0114 13:22:42.302528 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0efbfd67-e438-44c9-be8f-e424b1d930d9" (UID: "0efbfd67-e438-44c9-be8f-e424b1d930d9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:22:42.306061 kubelet[3399]: I0114 13:22:42.304452 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0efbfd67-e438-44c9-be8f-e424b1d930d9" (UID: "0efbfd67-e438-44c9-be8f-e424b1d930d9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:22:42.306061 kubelet[3399]: I0114 13:22:42.304504 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0efbfd67-e438-44c9-be8f-e424b1d930d9" (UID: "0efbfd67-e438-44c9-be8f-e424b1d930d9"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:22:42.306061 kubelet[3399]: I0114 13:22:42.304526 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-cni-path" (OuterVolumeSpecName: "cni-path") pod "0efbfd67-e438-44c9-be8f-e424b1d930d9" (UID: "0efbfd67-e438-44c9-be8f-e424b1d930d9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:22:42.306274 kubelet[3399]: I0114 13:22:42.306092 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0efbfd67-e438-44c9-be8f-e424b1d930d9" (UID: "0efbfd67-e438-44c9-be8f-e424b1d930d9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:22:42.307838 kubelet[3399]: I0114 13:22:42.307801 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0efbfd67-e438-44c9-be8f-e424b1d930d9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0efbfd67-e438-44c9-be8f-e424b1d930d9" (UID: "0efbfd67-e438-44c9-be8f-e424b1d930d9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 14 13:22:42.307934 kubelet[3399]: I0114 13:22:42.307881 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0efbfd67-e438-44c9-be8f-e424b1d930d9" (UID: "0efbfd67-e438-44c9-be8f-e424b1d930d9"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:22:42.310097 kubelet[3399]: I0114 13:22:42.310063 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0efbfd67-e438-44c9-be8f-e424b1d930d9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0efbfd67-e438-44c9-be8f-e424b1d930d9" (UID: "0efbfd67-e438-44c9-be8f-e424b1d930d9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 14 13:22:42.311418 kubelet[3399]: I0114 13:22:42.310254 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4966ff7a-a480-4a2e-a3db-7dda051dd884-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4966ff7a-a480-4a2e-a3db-7dda051dd884" (UID: "4966ff7a-a480-4a2e-a3db-7dda051dd884"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 14 13:22:42.311692 kubelet[3399]: I0114 13:22:42.311665 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0efbfd67-e438-44c9-be8f-e424b1d930d9-kube-api-access-p8d2v" (OuterVolumeSpecName: "kube-api-access-p8d2v") pod "0efbfd67-e438-44c9-be8f-e424b1d930d9" (UID: "0efbfd67-e438-44c9-be8f-e424b1d930d9"). InnerVolumeSpecName "kube-api-access-p8d2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 14 13:22:42.313015 kubelet[3399]: I0114 13:22:42.312978 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4966ff7a-a480-4a2e-a3db-7dda051dd884-kube-api-access-hzvgm" (OuterVolumeSpecName: "kube-api-access-hzvgm") pod "4966ff7a-a480-4a2e-a3db-7dda051dd884" (UID: "4966ff7a-a480-4a2e-a3db-7dda051dd884"). InnerVolumeSpecName "kube-api-access-hzvgm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 14 13:22:42.313637 kubelet[3399]: I0114 13:22:42.313610 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0efbfd67-e438-44c9-be8f-e424b1d930d9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0efbfd67-e438-44c9-be8f-e424b1d930d9" (UID: "0efbfd67-e438-44c9-be8f-e424b1d930d9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 14 13:22:42.403521 kubelet[3399]: I0114 13:22:42.403471 3399 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0efbfd67-e438-44c9-be8f-e424b1d930d9-clustermesh-secrets\") on node \"ci-4152.2.0-a-42c09c22a8\" DevicePath \"\"" Jan 14 13:22:42.403521 kubelet[3399]: I0114 13:22:42.403516 3399 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-host-proc-sys-net\") on node \"ci-4152.2.0-a-42c09c22a8\" DevicePath \"\"" Jan 14 13:22:42.403521 kubelet[3399]: I0114 13:22:42.403531 3399 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-lib-modules\") on node \"ci-4152.2.0-a-42c09c22a8\" DevicePath \"\"" Jan 14 13:22:42.403827 kubelet[3399]: I0114 13:22:42.403550 3399 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-cni-path\") on node \"ci-4152.2.0-a-42c09c22a8\" DevicePath \"\"" Jan 14 13:22:42.403827 kubelet[3399]: I0114 13:22:42.403564 3399 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4966ff7a-a480-4a2e-a3db-7dda051dd884-cilium-config-path\") on node \"ci-4152.2.0-a-42c09c22a8\" DevicePath \"\"" Jan 14 13:22:42.403827 kubelet[3399]: I0114 13:22:42.403577 
3399 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hzvgm\" (UniqueName: \"kubernetes.io/projected/4966ff7a-a480-4a2e-a3db-7dda051dd884-kube-api-access-hzvgm\") on node \"ci-4152.2.0-a-42c09c22a8\" DevicePath \"\"" Jan 14 13:22:42.403827 kubelet[3399]: I0114 13:22:42.403590 3399 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-bpf-maps\") on node \"ci-4152.2.0-a-42c09c22a8\" DevicePath \"\"" Jan 14 13:22:42.403827 kubelet[3399]: I0114 13:22:42.403604 3399 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-p8d2v\" (UniqueName: \"kubernetes.io/projected/0efbfd67-e438-44c9-be8f-e424b1d930d9-kube-api-access-p8d2v\") on node \"ci-4152.2.0-a-42c09c22a8\" DevicePath \"\"" Jan 14 13:22:42.403827 kubelet[3399]: I0114 13:22:42.403621 3399 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-cilium-run\") on node \"ci-4152.2.0-a-42c09c22a8\" DevicePath \"\"" Jan 14 13:22:42.403827 kubelet[3399]: I0114 13:22:42.403633 3399 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0efbfd67-e438-44c9-be8f-e424b1d930d9-cilium-config-path\") on node \"ci-4152.2.0-a-42c09c22a8\" DevicePath \"\"" Jan 14 13:22:42.403827 kubelet[3399]: I0114 13:22:42.403648 3399 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-etc-cni-netd\") on node \"ci-4152.2.0-a-42c09c22a8\" DevicePath \"\"" Jan 14 13:22:42.404076 kubelet[3399]: I0114 13:22:42.403661 3399 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0efbfd67-e438-44c9-be8f-e424b1d930d9-hubble-tls\") on node \"ci-4152.2.0-a-42c09c22a8\" DevicePath \"\"" Jan 14 13:22:42.404076 kubelet[3399]: I0114 
13:22:42.403674 3399 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-hostproc\") on node \"ci-4152.2.0-a-42c09c22a8\" DevicePath \"\"" Jan 14 13:22:42.404076 kubelet[3399]: I0114 13:22:42.403686 3399 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0efbfd67-e438-44c9-be8f-e424b1d930d9-host-proc-sys-kernel\") on node \"ci-4152.2.0-a-42c09c22a8\" DevicePath \"\"" Jan 14 13:22:42.513671 kubelet[3399]: I0114 13:22:42.513541 3399 scope.go:117] "RemoveContainer" containerID="2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99" Jan 14 13:22:42.520199 containerd[1761]: time="2025-01-14T13:22:42.519986422Z" level=info msg="RemoveContainer for \"2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99\"" Jan 14 13:22:42.525565 systemd[1]: Removed slice kubepods-besteffort-pod4966ff7a_a480_4a2e_a3db_7dda051dd884.slice - libcontainer container kubepods-besteffort-pod4966ff7a_a480_4a2e_a3db_7dda051dd884.slice. Jan 14 13:22:42.527626 systemd[1]: Removed slice kubepods-burstable-pod0efbfd67_e438_44c9_be8f_e424b1d930d9.slice - libcontainer container kubepods-burstable-pod0efbfd67_e438_44c9_be8f_e424b1d930d9.slice. Jan 14 13:22:42.527865 systemd[1]: kubepods-burstable-pod0efbfd67_e438_44c9_be8f_e424b1d930d9.slice: Consumed 7.353s CPU time. 
Jan 14 13:22:42.534435 containerd[1761]: time="2025-01-14T13:22:42.534270855Z" level=info msg="RemoveContainer for \"2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99\" returns successfully" Jan 14 13:22:42.534703 kubelet[3399]: I0114 13:22:42.534678 3399 scope.go:117] "RemoveContainer" containerID="98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63" Jan 14 13:22:42.535874 containerd[1761]: time="2025-01-14T13:22:42.535822481Z" level=info msg="RemoveContainer for \"98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63\"" Jan 14 13:22:42.545055 containerd[1761]: time="2025-01-14T13:22:42.545017231Z" level=info msg="RemoveContainer for \"98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63\" returns successfully" Jan 14 13:22:42.545723 kubelet[3399]: I0114 13:22:42.545415 3399 scope.go:117] "RemoveContainer" containerID="565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe" Jan 14 13:22:42.547487 containerd[1761]: time="2025-01-14T13:22:42.547124266Z" level=info msg="RemoveContainer for \"565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe\"" Jan 14 13:22:42.557144 containerd[1761]: time="2025-01-14T13:22:42.557106029Z" level=info msg="RemoveContainer for \"565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe\" returns successfully" Jan 14 13:22:42.557334 kubelet[3399]: I0114 13:22:42.557310 3399 scope.go:117] "RemoveContainer" containerID="d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca" Jan 14 13:22:42.558408 containerd[1761]: time="2025-01-14T13:22:42.558375649Z" level=info msg="RemoveContainer for \"d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca\"" Jan 14 13:22:42.566255 containerd[1761]: time="2025-01-14T13:22:42.566218878Z" level=info msg="RemoveContainer for \"d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca\" returns successfully" Jan 14 13:22:42.566440 kubelet[3399]: I0114 13:22:42.566416 3399 scope.go:117] 
"RemoveContainer" containerID="caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549" Jan 14 13:22:42.567429 containerd[1761]: time="2025-01-14T13:22:42.567338396Z" level=info msg="RemoveContainer for \"caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549\"" Jan 14 13:22:42.576364 containerd[1761]: time="2025-01-14T13:22:42.576330243Z" level=info msg="RemoveContainer for \"caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549\" returns successfully" Jan 14 13:22:42.576582 kubelet[3399]: I0114 13:22:42.576528 3399 scope.go:117] "RemoveContainer" containerID="2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99" Jan 14 13:22:42.577075 kubelet[3399]: E0114 13:22:42.576866 3399 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99\": not found" containerID="2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99" Jan 14 13:22:42.577075 kubelet[3399]: I0114 13:22:42.576897 3399 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99"} err="failed to get container status \"2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99\": not found" Jan 14 13:22:42.577075 kubelet[3399]: I0114 13:22:42.576960 3399 scope.go:117] "RemoveContainer" containerID="98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63" Jan 14 13:22:42.577231 containerd[1761]: time="2025-01-14T13:22:42.576730149Z" level=error msg="ContainerStatus for \"2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"2e684478c580d011c3373c378011f5b2c9f0250a96962b2a559329576b67ec99\": not found" Jan 14 13:22:42.577231 containerd[1761]: time="2025-01-14T13:22:42.577200657Z" level=error msg="ContainerStatus for \"98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63\": not found" Jan 14 13:22:42.577372 kubelet[3399]: E0114 13:22:42.577325 3399 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63\": not found" containerID="98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63" Jan 14 13:22:42.577435 kubelet[3399]: I0114 13:22:42.577383 3399 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63"} err="failed to get container status \"98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63\": rpc error: code = NotFound desc = an error occurred when try to find container \"98d75a6630015109f0c63520e49d07a347c5969a1059d5eee3949b53474dbd63\": not found" Jan 14 13:22:42.577435 kubelet[3399]: I0114 13:22:42.577407 3399 scope.go:117] "RemoveContainer" containerID="565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe" Jan 14 13:22:42.577640 containerd[1761]: time="2025-01-14T13:22:42.577600364Z" level=error msg="ContainerStatus for \"565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe\": not found" Jan 14 13:22:42.577762 kubelet[3399]: E0114 13:22:42.577738 3399 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe\": not found" containerID="565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe" Jan 14 13:22:42.577836 kubelet[3399]: I0114 13:22:42.577781 3399 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe"} err="failed to get container status \"565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"565b65b419ee65296473609378699ece6d03cc32c5908e600e6637de6b8455fe\": not found" Jan 14 13:22:42.577836 kubelet[3399]: I0114 13:22:42.577804 3399 scope.go:117] "RemoveContainer" containerID="d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca" Jan 14 13:22:42.578026 containerd[1761]: time="2025-01-14T13:22:42.577971570Z" level=error msg="ContainerStatus for \"d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca\": not found" Jan 14 13:22:42.578148 kubelet[3399]: E0114 13:22:42.578127 3399 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca\": not found" containerID="d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca" Jan 14 13:22:42.578148 kubelet[3399]: I0114 13:22:42.578149 3399 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca"} err="failed to get container status \"d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"d3365c4b8c2f39b18ade33c83261dcdfa92c951f7b0a72d7686e2088ec2a5bca\": not found" Jan 14 13:22:42.578294 kubelet[3399]: I0114 13:22:42.578167 3399 scope.go:117] "RemoveContainer" containerID="caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549" Jan 14 13:22:42.578369 containerd[1761]: time="2025-01-14T13:22:42.578325676Z" level=error msg="ContainerStatus for \"caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549\": not found" Jan 14 13:22:42.578494 kubelet[3399]: E0114 13:22:42.578471 3399 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549\": not found" containerID="caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549" Jan 14 13:22:42.578562 kubelet[3399]: I0114 13:22:42.578496 3399 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549"} err="failed to get container status \"caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549\": rpc error: code = NotFound desc = an error occurred when try to find container \"caa9c662b63a99805613e92b80a04ae2546d3238c56865b32ca7ae5e49cea549\": not found" Jan 14 13:22:42.578562 kubelet[3399]: I0114 13:22:42.578515 3399 scope.go:117] "RemoveContainer" containerID="3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a" Jan 14 13:22:42.579462 containerd[1761]: time="2025-01-14T13:22:42.579442594Z" level=info msg="RemoveContainer for \"3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a\"" Jan 14 13:22:42.591126 containerd[1761]: 
time="2025-01-14T13:22:42.591088184Z" level=info msg="RemoveContainer for \"3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a\" returns successfully" Jan 14 13:22:42.591391 kubelet[3399]: I0114 13:22:42.591327 3399 scope.go:117] "RemoveContainer" containerID="3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a" Jan 14 13:22:42.591735 containerd[1761]: time="2025-01-14T13:22:42.591701694Z" level=error msg="ContainerStatus for \"3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a\": not found" Jan 14 13:22:42.592090 kubelet[3399]: E0114 13:22:42.592017 3399 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a\": not found" containerID="3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a" Jan 14 13:22:42.592090 kubelet[3399]: I0114 13:22:42.592057 3399 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a"} err="failed to get container status \"3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ae44a4afc15d8cae01c44fa08a3971ce4fa67ed3b8a4dd1acf48a1de51ed05a\": not found" Jan 14 13:22:42.964373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bc68bccdf74ed0ce286037796f50c75d6f70ce32ef97d5284e5050d09d70bed-rootfs.mount: Deactivated successfully. Jan 14 13:22:42.964705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e-rootfs.mount: Deactivated successfully. 
Jan 14 13:22:42.964910 systemd[1]: var-lib-kubelet-pods-4966ff7a\x2da480\x2d4a2e\x2da3db\x2d7dda051dd884-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhzvgm.mount: Deactivated successfully. Jan 14 13:22:42.965013 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79f5bf3e5609b33e9816e908e562e8a96d6f30fe70fca706a59c7117b591fa7e-shm.mount: Deactivated successfully. Jan 14 13:22:42.965097 systemd[1]: var-lib-kubelet-pods-0efbfd67\x2de438\x2d44c9\x2dbe8f\x2de424b1d930d9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp8d2v.mount: Deactivated successfully. Jan 14 13:22:42.965188 systemd[1]: var-lib-kubelet-pods-0efbfd67\x2de438\x2d44c9\x2dbe8f\x2de424b1d930d9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 14 13:22:42.965277 systemd[1]: var-lib-kubelet-pods-0efbfd67\x2de438\x2d44c9\x2dbe8f\x2de424b1d930d9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 14 13:22:43.069783 kubelet[3399]: I0114 13:22:43.069739 3399 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0efbfd67-e438-44c9-be8f-e424b1d930d9" path="/var/lib/kubelet/pods/0efbfd67-e438-44c9-be8f-e424b1d930d9/volumes" Jan 14 13:22:43.070505 kubelet[3399]: I0114 13:22:43.070477 3399 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4966ff7a-a480-4a2e-a3db-7dda051dd884" path="/var/lib/kubelet/pods/4966ff7a-a480-4a2e-a3db-7dda051dd884/volumes" Jan 14 13:22:43.988842 sshd[4988]: Connection closed by 10.200.16.10 port 41044 Jan 14 13:22:43.989894 sshd-session[4986]: pam_unix(sshd:session): session closed for user core Jan 14 13:22:43.993129 systemd[1]: sshd@22-10.200.4.13:22-10.200.16.10:41044.service: Deactivated successfully. Jan 14 13:22:43.995763 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 13:22:43.997961 systemd-logind[1734]: Session 25 logged out. Waiting for processes to exit. 
Jan 14 13:22:43.999081 systemd-logind[1734]: Removed session 25. Jan 14 13:22:44.103704 systemd[1]: Started sshd@23-10.200.4.13:22-10.200.16.10:41056.service - OpenSSH per-connection server daemon (10.200.16.10:41056). Jan 14 13:22:44.166543 kubelet[3399]: E0114 13:22:44.166484 3399 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 14 13:22:44.718526 sshd[5149]: Accepted publickey for core from 10.200.16.10 port 41056 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:22:44.720030 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:22:44.724867 systemd-logind[1734]: New session 26 of user core. Jan 14 13:22:44.733546 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 14 13:22:45.753813 kubelet[3399]: I0114 13:22:45.753765 3399 topology_manager.go:215] "Topology Admit Handler" podUID="7691eec7-e599-4625-89eb-c18866b1539d" podNamespace="kube-system" podName="cilium-gw8xp" Jan 14 13:22:45.755688 kubelet[3399]: E0114 13:22:45.753835 3399 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4966ff7a-a480-4a2e-a3db-7dda051dd884" containerName="cilium-operator" Jan 14 13:22:45.755688 kubelet[3399]: E0114 13:22:45.753848 3399 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0efbfd67-e438-44c9-be8f-e424b1d930d9" containerName="cilium-agent" Jan 14 13:22:45.755688 kubelet[3399]: E0114 13:22:45.753858 3399 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0efbfd67-e438-44c9-be8f-e424b1d930d9" containerName="mount-cgroup" Jan 14 13:22:45.755688 kubelet[3399]: E0114 13:22:45.753866 3399 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0efbfd67-e438-44c9-be8f-e424b1d930d9" containerName="apply-sysctl-overwrites" Jan 14 13:22:45.755688 kubelet[3399]: E0114 13:22:45.753873 3399 
cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0efbfd67-e438-44c9-be8f-e424b1d930d9" containerName="mount-bpf-fs" Jan 14 13:22:45.755688 kubelet[3399]: E0114 13:22:45.753881 3399 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0efbfd67-e438-44c9-be8f-e424b1d930d9" containerName="clean-cilium-state" Jan 14 13:22:45.755688 kubelet[3399]: I0114 13:22:45.753912 3399 memory_manager.go:354] "RemoveStaleState removing state" podUID="0efbfd67-e438-44c9-be8f-e424b1d930d9" containerName="cilium-agent" Jan 14 13:22:45.755688 kubelet[3399]: I0114 13:22:45.753920 3399 memory_manager.go:354] "RemoveStaleState removing state" podUID="4966ff7a-a480-4a2e-a3db-7dda051dd884" containerName="cilium-operator" Jan 14 13:22:45.765887 systemd[1]: Created slice kubepods-burstable-pod7691eec7_e599_4625_89eb_c18866b1539d.slice - libcontainer container kubepods-burstable-pod7691eec7_e599_4625_89eb_c18866b1539d.slice. Jan 14 13:22:45.824548 kubelet[3399]: I0114 13:22:45.824187 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7691eec7-e599-4625-89eb-c18866b1539d-hostproc\") pod \"cilium-gw8xp\" (UID: \"7691eec7-e599-4625-89eb-c18866b1539d\") " pod="kube-system/cilium-gw8xp" Jan 14 13:22:45.824548 kubelet[3399]: I0114 13:22:45.824227 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7691eec7-e599-4625-89eb-c18866b1539d-etc-cni-netd\") pod \"cilium-gw8xp\" (UID: \"7691eec7-e599-4625-89eb-c18866b1539d\") " pod="kube-system/cilium-gw8xp" Jan 14 13:22:45.824548 kubelet[3399]: I0114 13:22:45.824247 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7691eec7-e599-4625-89eb-c18866b1539d-hubble-tls\") pod \"cilium-gw8xp\" (UID: 
\"7691eec7-e599-4625-89eb-c18866b1539d\") " pod="kube-system/cilium-gw8xp" Jan 14 13:22:45.824548 kubelet[3399]: I0114 13:22:45.824262 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7691eec7-e599-4625-89eb-c18866b1539d-cni-path\") pod \"cilium-gw8xp\" (UID: \"7691eec7-e599-4625-89eb-c18866b1539d\") " pod="kube-system/cilium-gw8xp" Jan 14 13:22:45.824548 kubelet[3399]: I0114 13:22:45.824277 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7691eec7-e599-4625-89eb-c18866b1539d-host-proc-sys-kernel\") pod \"cilium-gw8xp\" (UID: \"7691eec7-e599-4625-89eb-c18866b1539d\") " pod="kube-system/cilium-gw8xp" Jan 14 13:22:45.824548 kubelet[3399]: I0114 13:22:45.824291 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7691eec7-e599-4625-89eb-c18866b1539d-cilium-config-path\") pod \"cilium-gw8xp\" (UID: \"7691eec7-e599-4625-89eb-c18866b1539d\") " pod="kube-system/cilium-gw8xp" Jan 14 13:22:45.824854 kubelet[3399]: I0114 13:22:45.824305 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7691eec7-e599-4625-89eb-c18866b1539d-lib-modules\") pod \"cilium-gw8xp\" (UID: \"7691eec7-e599-4625-89eb-c18866b1539d\") " pod="kube-system/cilium-gw8xp" Jan 14 13:22:45.824854 kubelet[3399]: I0114 13:22:45.824320 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7691eec7-e599-4625-89eb-c18866b1539d-host-proc-sys-net\") pod \"cilium-gw8xp\" (UID: \"7691eec7-e599-4625-89eb-c18866b1539d\") " pod="kube-system/cilium-gw8xp" Jan 14 13:22:45.824854 kubelet[3399]: 
I0114 13:22:45.824336 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7691eec7-e599-4625-89eb-c18866b1539d-bpf-maps\") pod \"cilium-gw8xp\" (UID: \"7691eec7-e599-4625-89eb-c18866b1539d\") " pod="kube-system/cilium-gw8xp" Jan 14 13:22:45.824854 kubelet[3399]: I0114 13:22:45.824363 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7691eec7-e599-4625-89eb-c18866b1539d-cilium-cgroup\") pod \"cilium-gw8xp\" (UID: \"7691eec7-e599-4625-89eb-c18866b1539d\") " pod="kube-system/cilium-gw8xp" Jan 14 13:22:45.824854 kubelet[3399]: I0114 13:22:45.824384 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7691eec7-e599-4625-89eb-c18866b1539d-xtables-lock\") pod \"cilium-gw8xp\" (UID: \"7691eec7-e599-4625-89eb-c18866b1539d\") " pod="kube-system/cilium-gw8xp" Jan 14 13:22:45.824854 kubelet[3399]: I0114 13:22:45.824406 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6qm7\" (UniqueName: \"kubernetes.io/projected/7691eec7-e599-4625-89eb-c18866b1539d-kube-api-access-g6qm7\") pod \"cilium-gw8xp\" (UID: \"7691eec7-e599-4625-89eb-c18866b1539d\") " pod="kube-system/cilium-gw8xp" Jan 14 13:22:45.825013 kubelet[3399]: I0114 13:22:45.824432 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7691eec7-e599-4625-89eb-c18866b1539d-cilium-run\") pod \"cilium-gw8xp\" (UID: \"7691eec7-e599-4625-89eb-c18866b1539d\") " pod="kube-system/cilium-gw8xp" Jan 14 13:22:45.825013 kubelet[3399]: I0114 13:22:45.824453 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7691eec7-e599-4625-89eb-c18866b1539d-clustermesh-secrets\") pod \"cilium-gw8xp\" (UID: \"7691eec7-e599-4625-89eb-c18866b1539d\") " pod="kube-system/cilium-gw8xp" Jan 14 13:22:45.825013 kubelet[3399]: I0114 13:22:45.824476 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7691eec7-e599-4625-89eb-c18866b1539d-cilium-ipsec-secrets\") pod \"cilium-gw8xp\" (UID: \"7691eec7-e599-4625-89eb-c18866b1539d\") " pod="kube-system/cilium-gw8xp" Jan 14 13:22:45.844247 sshd[5151]: Connection closed by 10.200.16.10 port 41056 Jan 14 13:22:45.845585 sshd-session[5149]: pam_unix(sshd:session): session closed for user core Jan 14 13:22:45.848486 systemd[1]: sshd@23-10.200.4.13:22-10.200.16.10:41056.service: Deactivated successfully. Jan 14 13:22:45.850975 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 13:22:45.853006 systemd-logind[1734]: Session 26 logged out. Waiting for processes to exit. Jan 14 13:22:45.854295 systemd-logind[1734]: Removed session 26. Jan 14 13:22:45.970272 systemd[1]: Started sshd@24-10.200.4.13:22-10.200.16.10:47192.service - OpenSSH per-connection server daemon (10.200.16.10:47192). Jan 14 13:22:46.074186 containerd[1761]: time="2025-01-14T13:22:46.073705007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gw8xp,Uid:7691eec7-e599-4625-89eb-c18866b1539d,Namespace:kube-system,Attempt:0,}" Jan 14 13:22:46.123128 containerd[1761]: time="2025-01-14T13:22:46.122814128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:22:46.123128 containerd[1761]: time="2025-01-14T13:22:46.122863228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:22:46.123128 containerd[1761]: time="2025-01-14T13:22:46.122878028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:22:46.123128 containerd[1761]: time="2025-01-14T13:22:46.122961129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:22:46.147554 systemd[1]: Started cri-containerd-ffe3be60bfc9499d1c50a38e66530429c243b7331138dc65f020353fe5855f3f.scope - libcontainer container ffe3be60bfc9499d1c50a38e66530429c243b7331138dc65f020353fe5855f3f. Jan 14 13:22:46.171103 containerd[1761]: time="2025-01-14T13:22:46.171063738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gw8xp,Uid:7691eec7-e599-4625-89eb-c18866b1539d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffe3be60bfc9499d1c50a38e66530429c243b7331138dc65f020353fe5855f3f\"" Jan 14 13:22:46.176384 containerd[1761]: time="2025-01-14T13:22:46.176215003Z" level=info msg="CreateContainer within sandbox \"ffe3be60bfc9499d1c50a38e66530429c243b7331138dc65f020353fe5855f3f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 14 13:22:46.216935 containerd[1761]: time="2025-01-14T13:22:46.216886617Z" level=info msg="CreateContainer within sandbox \"ffe3be60bfc9499d1c50a38e66530429c243b7331138dc65f020353fe5855f3f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c06e59185253ed8176232abfab4af1b6c71f243ef7988c2a4218db7563023693\"" Jan 14 13:22:46.217705 containerd[1761]: time="2025-01-14T13:22:46.217599326Z" level=info msg="StartContainer for \"c06e59185253ed8176232abfab4af1b6c71f243ef7988c2a4218db7563023693\"" Jan 14 13:22:46.244530 systemd[1]: Started cri-containerd-c06e59185253ed8176232abfab4af1b6c71f243ef7988c2a4218db7563023693.scope - libcontainer container 
c06e59185253ed8176232abfab4af1b6c71f243ef7988c2a4218db7563023693. Jan 14 13:22:46.275104 containerd[1761]: time="2025-01-14T13:22:46.274918951Z" level=info msg="StartContainer for \"c06e59185253ed8176232abfab4af1b6c71f243ef7988c2a4218db7563023693\" returns successfully" Jan 14 13:22:46.282717 systemd[1]: cri-containerd-c06e59185253ed8176232abfab4af1b6c71f243ef7988c2a4218db7563023693.scope: Deactivated successfully. Jan 14 13:22:46.353131 containerd[1761]: time="2025-01-14T13:22:46.353069539Z" level=info msg="shim disconnected" id=c06e59185253ed8176232abfab4af1b6c71f243ef7988c2a4218db7563023693 namespace=k8s.io Jan 14 13:22:46.353131 containerd[1761]: time="2025-01-14T13:22:46.353125139Z" level=warning msg="cleaning up after shim disconnected" id=c06e59185253ed8176232abfab4af1b6c71f243ef7988c2a4218db7563023693 namespace=k8s.io Jan 14 13:22:46.353131 containerd[1761]: time="2025-01-14T13:22:46.353137340Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:22:46.532148 containerd[1761]: time="2025-01-14T13:22:46.531900800Z" level=info msg="CreateContainer within sandbox \"ffe3be60bfc9499d1c50a38e66530429c243b7331138dc65f020353fe5855f3f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 14 13:22:46.578547 containerd[1761]: time="2025-01-14T13:22:46.578502189Z" level=info msg="CreateContainer within sandbox \"ffe3be60bfc9499d1c50a38e66530429c243b7331138dc65f020353fe5855f3f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ae31160bfc140d143bc4d4e9d362be1079a049311d098f5524a2fa8fa9771655\"" Jan 14 13:22:46.579305 containerd[1761]: time="2025-01-14T13:22:46.579127697Z" level=info msg="StartContainer for \"ae31160bfc140d143bc4d4e9d362be1079a049311d098f5524a2fa8fa9771655\"" Jan 14 13:22:46.585423 sshd[5164]: Accepted publickey for core from 10.200.16.10 port 47192 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:22:46.586557 sshd-session[5164]: pam_unix(sshd:session): 
session opened for user core(uid=500) by core(uid=0) Jan 14 13:22:46.593253 systemd-logind[1734]: New session 27 of user core. Jan 14 13:22:46.603188 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 14 13:22:46.614614 systemd[1]: Started cri-containerd-ae31160bfc140d143bc4d4e9d362be1079a049311d098f5524a2fa8fa9771655.scope - libcontainer container ae31160bfc140d143bc4d4e9d362be1079a049311d098f5524a2fa8fa9771655. Jan 14 13:22:46.647094 containerd[1761]: time="2025-01-14T13:22:46.644991229Z" level=info msg="StartContainer for \"ae31160bfc140d143bc4d4e9d362be1079a049311d098f5524a2fa8fa9771655\" returns successfully" Jan 14 13:22:46.650975 systemd[1]: cri-containerd-ae31160bfc140d143bc4d4e9d362be1079a049311d098f5524a2fa8fa9771655.scope: Deactivated successfully. Jan 14 13:22:46.682926 containerd[1761]: time="2025-01-14T13:22:46.682862908Z" level=info msg="shim disconnected" id=ae31160bfc140d143bc4d4e9d362be1079a049311d098f5524a2fa8fa9771655 namespace=k8s.io Jan 14 13:22:46.682926 containerd[1761]: time="2025-01-14T13:22:46.682921009Z" level=warning msg="cleaning up after shim disconnected" id=ae31160bfc140d143bc4d4e9d362be1079a049311d098f5524a2fa8fa9771655 namespace=k8s.io Jan 14 13:22:46.682926 containerd[1761]: time="2025-01-14T13:22:46.682934109Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:22:47.014449 sshd[5287]: Connection closed by 10.200.16.10 port 47192 Jan 14 13:22:47.016881 sshd-session[5164]: pam_unix(sshd:session): session closed for user core Jan 14 13:22:47.019476 systemd[1]: sshd@24-10.200.4.13:22-10.200.16.10:47192.service: Deactivated successfully. Jan 14 13:22:47.022044 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 13:22:47.024151 systemd-logind[1734]: Session 27 logged out. Waiting for processes to exit. Jan 14 13:22:47.025588 systemd-logind[1734]: Removed session 27. 
Jan 14 13:22:47.128022 systemd[1]: Started sshd@25-10.200.4.13:22-10.200.16.10:47200.service - OpenSSH per-connection server daemon (10.200.16.10:47200). Jan 14 13:22:47.534308 containerd[1761]: time="2025-01-14T13:22:47.534052170Z" level=info msg="CreateContainer within sandbox \"ffe3be60bfc9499d1c50a38e66530429c243b7331138dc65f020353fe5855f3f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 14 13:22:47.574565 containerd[1761]: time="2025-01-14T13:22:47.574518681Z" level=info msg="CreateContainer within sandbox \"ffe3be60bfc9499d1c50a38e66530429c243b7331138dc65f020353fe5855f3f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"81195eba142b03662e0b4ca593a8d94172a1d7c955689e2339c753c0f1809b16\"" Jan 14 13:22:47.576521 containerd[1761]: time="2025-01-14T13:22:47.575124989Z" level=info msg="StartContainer for \"81195eba142b03662e0b4ca593a8d94172a1d7c955689e2339c753c0f1809b16\"" Jan 14 13:22:47.611511 systemd[1]: Started cri-containerd-81195eba142b03662e0b4ca593a8d94172a1d7c955689e2339c753c0f1809b16.scope - libcontainer container 81195eba142b03662e0b4ca593a8d94172a1d7c955689e2339c753c0f1809b16. Jan 14 13:22:47.646893 systemd[1]: cri-containerd-81195eba142b03662e0b4ca593a8d94172a1d7c955689e2339c753c0f1809b16.scope: Deactivated successfully. 
Jan 14 13:22:47.648754 containerd[1761]: time="2025-01-14T13:22:47.648714019Z" level=info msg="StartContainer for \"81195eba142b03662e0b4ca593a8d94172a1d7c955689e2339c753c0f1809b16\" returns successfully" Jan 14 13:22:47.680434 containerd[1761]: time="2025-01-14T13:22:47.680368820Z" level=info msg="shim disconnected" id=81195eba142b03662e0b4ca593a8d94172a1d7c955689e2339c753c0f1809b16 namespace=k8s.io Jan 14 13:22:47.680434 containerd[1761]: time="2025-01-14T13:22:47.680424820Z" level=warning msg="cleaning up after shim disconnected" id=81195eba142b03662e0b4ca593a8d94172a1d7c955689e2339c753c0f1809b16 namespace=k8s.io Jan 14 13:22:47.680434 containerd[1761]: time="2025-01-14T13:22:47.680434820Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:22:47.736035 sshd[5338]: Accepted publickey for core from 10.200.16.10 port 47200 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:22:47.737867 sshd-session[5338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:22:47.744194 systemd-logind[1734]: New session 28 of user core. Jan 14 13:22:47.746532 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 14 13:22:47.932678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81195eba142b03662e0b4ca593a8d94172a1d7c955689e2339c753c0f1809b16-rootfs.mount: Deactivated successfully. 
Jan 14 13:22:48.538337 containerd[1761]: time="2025-01-14T13:22:48.538292125Z" level=info msg="CreateContainer within sandbox \"ffe3be60bfc9499d1c50a38e66530429c243b7331138dc65f020353fe5855f3f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 14 13:22:48.573603 containerd[1761]: time="2025-01-14T13:22:48.573551188Z" level=info msg="CreateContainer within sandbox \"ffe3be60bfc9499d1c50a38e66530429c243b7331138dc65f020353fe5855f3f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a7413ff2b785a7933eb8efecdea981dfe7866ac1830e805d851ee7fe6ab429b0\""
Jan 14 13:22:48.574324 containerd[1761]: time="2025-01-14T13:22:48.574087296Z" level=info msg="StartContainer for \"a7413ff2b785a7933eb8efecdea981dfe7866ac1830e805d851ee7fe6ab429b0\""
Jan 14 13:22:48.614545 systemd[1]: Started cri-containerd-a7413ff2b785a7933eb8efecdea981dfe7866ac1830e805d851ee7fe6ab429b0.scope - libcontainer container a7413ff2b785a7933eb8efecdea981dfe7866ac1830e805d851ee7fe6ab429b0.
Jan 14 13:22:48.638210 systemd[1]: cri-containerd-a7413ff2b785a7933eb8efecdea981dfe7866ac1830e805d851ee7fe6ab429b0.scope: Deactivated successfully.
Jan 14 13:22:48.642563 containerd[1761]: time="2025-01-14T13:22:48.642444287Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7691eec7_e599_4625_89eb_c18866b1539d.slice/cri-containerd-a7413ff2b785a7933eb8efecdea981dfe7866ac1830e805d851ee7fe6ab429b0.scope/memory.events\": no such file or directory"
Jan 14 13:22:48.645824 containerd[1761]: time="2025-01-14T13:22:48.645709039Z" level=info msg="StartContainer for \"a7413ff2b785a7933eb8efecdea981dfe7866ac1830e805d851ee7fe6ab429b0\" returns successfully"
Jan 14 13:22:48.686165 containerd[1761]: time="2025-01-14T13:22:48.685921481Z" level=info msg="shim disconnected" id=a7413ff2b785a7933eb8efecdea981dfe7866ac1830e805d851ee7fe6ab429b0 namespace=k8s.io
Jan 14 13:22:48.686165 containerd[1761]: time="2025-01-14T13:22:48.685983882Z" level=warning msg="cleaning up after shim disconnected" id=a7413ff2b785a7933eb8efecdea981dfe7866ac1830e805d851ee7fe6ab429b0 namespace=k8s.io
Jan 14 13:22:48.686165 containerd[1761]: time="2025-01-14T13:22:48.685992882Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:22:48.932675 systemd[1]: run-containerd-runc-k8s.io-a7413ff2b785a7933eb8efecdea981dfe7866ac1830e805d851ee7fe6ab429b0-runc.mAWZlv.mount: Deactivated successfully.
Jan 14 13:22:48.932791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7413ff2b785a7933eb8efecdea981dfe7866ac1830e805d851ee7fe6ab429b0-rootfs.mount: Deactivated successfully.
Jan 14 13:22:49.167766 kubelet[3399]: E0114 13:22:49.167705 3399 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 14 13:22:49.543970 containerd[1761]: time="2025-01-14T13:22:49.543925970Z" level=info msg="CreateContainer within sandbox \"ffe3be60bfc9499d1c50a38e66530429c243b7331138dc65f020353fe5855f3f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 14 13:22:49.593522 containerd[1761]: time="2025-01-14T13:22:49.593471860Z" level=info msg="CreateContainer within sandbox \"ffe3be60bfc9499d1c50a38e66530429c243b7331138dc65f020353fe5855f3f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"18af1d37e5e9ce7531a73007ab0be8724d1fbf7520889971445a5104878d19fd\""
Jan 14 13:22:49.594205 containerd[1761]: time="2025-01-14T13:22:49.594170671Z" level=info msg="StartContainer for \"18af1d37e5e9ce7531a73007ab0be8724d1fbf7520889971445a5104878d19fd\""
Jan 14 13:22:49.632516 systemd[1]: Started cri-containerd-18af1d37e5e9ce7531a73007ab0be8724d1fbf7520889971445a5104878d19fd.scope - libcontainer container 18af1d37e5e9ce7531a73007ab0be8724d1fbf7520889971445a5104878d19fd.
Jan 14 13:22:49.665374 containerd[1761]: time="2025-01-14T13:22:49.665146304Z" level=info msg="StartContainer for \"18af1d37e5e9ce7531a73007ab0be8724d1fbf7520889971445a5104878d19fd\" returns successfully"
Jan 14 13:22:50.192379 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 14 13:22:52.965289 kubelet[3399]: I0114 13:22:52.964554 3399 setters.go:580] "Node became not ready" node="ci-4152.2.0-a-42c09c22a8" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-14T13:22:52Z","lastTransitionTime":"2025-01-14T13:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 14 13:22:53.104383 systemd-networkd[1501]: lxc_health: Link UP
Jan 14 13:22:53.112518 systemd-networkd[1501]: lxc_health: Gained carrier
Jan 14 13:22:54.105062 kubelet[3399]: I0114 13:22:54.104988 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gw8xp" podStartSLOduration=9.104962269 podStartE2EDuration="9.104962269s" podCreationTimestamp="2025-01-14 13:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:22:50.573929103 +0000 UTC m=+191.605928988" watchObservedRunningTime="2025-01-14 13:22:54.104962269 +0000 UTC m=+195.136962154"
Jan 14 13:22:54.435076 systemd[1]: run-containerd-runc-k8s.io-18af1d37e5e9ce7531a73007ab0be8724d1fbf7520889971445a5104878d19fd-runc.edCS5L.mount: Deactivated successfully.
Jan 14 13:22:54.511602 systemd-networkd[1501]: lxc_health: Gained IPv6LL
Jan 14 13:22:58.929678 sshd[5397]: Connection closed by 10.200.16.10 port 47200
Jan 14 13:22:58.930663 sshd-session[5338]: pam_unix(sshd:session): session closed for user core
Jan 14 13:22:58.935239 systemd[1]: sshd@25-10.200.4.13:22-10.200.16.10:47200.service: Deactivated successfully.
Jan 14 13:22:58.938224 systemd[1]: session-28.scope: Deactivated successfully.
Jan 14 13:22:58.939196 systemd-logind[1734]: Session 28 logged out. Waiting for processes to exit.
Jan 14 13:22:58.940374 systemd-logind[1734]: Removed session 28.