Jan 29 11:08:41.909739 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025
Jan 29 11:08:41.909769 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 29 11:08:41.909799 kernel: BIOS-provided physical RAM map:
Jan 29 11:08:41.909807 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 29 11:08:41.909816 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 29 11:08:41.909824 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 29 11:08:41.909834 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 29 11:08:41.909844 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 29 11:08:41.909852 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 29 11:08:41.909861 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 29 11:08:41.909870 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 29 11:08:41.909882 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 29 11:08:41.909891 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 29 11:08:41.909899 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 29 11:08:41.909910 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 29 11:08:41.909919 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 29 11:08:41.909932 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jan 29 11:08:41.909941 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jan 29 11:08:41.909950 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jan 29 11:08:41.909960 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jan 29 11:08:41.909969 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 29 11:08:41.909978 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 29 11:08:41.909988 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 29 11:08:41.909997 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 11:08:41.910006 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 29 11:08:41.910015 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 11:08:41.910024 kernel: NX (Execute Disable) protection: active
Jan 29 11:08:41.910037 kernel: APIC: Static calls initialized
Jan 29 11:08:41.910046 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jan 29 11:08:41.910056 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jan 29 11:08:41.910392 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jan 29 11:08:41.910401 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jan 29 11:08:41.910408 kernel: extended physical RAM map:
Jan 29 11:08:41.910415 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 29 11:08:41.910422 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 29 11:08:41.910438 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 29 11:08:41.910445 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 29 11:08:41.910452 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 29 11:08:41.910459 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 29 11:08:41.910471 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 29 11:08:41.910490 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Jan 29 11:08:41.910497 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Jan 29 11:08:41.910505 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Jan 29 11:08:41.910512 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Jan 29 11:08:41.910519 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Jan 29 11:08:41.910539 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 29 11:08:41.910546 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 29 11:08:41.910553 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 29 11:08:41.910561 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 29 11:08:41.910568 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 29 11:08:41.910576 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jan 29 11:08:41.910583 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jan 29 11:08:41.910591 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jan 29 11:08:41.910598 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jan 29 11:08:41.910608 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 29 11:08:41.910615 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 29 11:08:41.910623 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 29 11:08:41.910630 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 11:08:41.910638 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 29 11:08:41.910645 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 11:08:41.910652 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:08:41.910659 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Jan 29 11:08:41.910667 kernel: random: crng init done
Jan 29 11:08:41.910674 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 29 11:08:41.910682 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 29 11:08:41.910689 kernel: secureboot: Secure boot disabled
Jan 29 11:08:41.910699 kernel: SMBIOS 2.8 present.
Jan 29 11:08:41.910706 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 29 11:08:41.910713 kernel: Hypervisor detected: KVM
Jan 29 11:08:41.910721 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 11:08:41.910728 kernel: kvm-clock: using sched offset of 2759448354 cycles
Jan 29 11:08:41.910736 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 11:08:41.910744 kernel: tsc: Detected 2794.748 MHz processor
Jan 29 11:08:41.910752 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 11:08:41.910760 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 11:08:41.910767 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 29 11:08:41.910798 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 29 11:08:41.910806 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 11:08:41.910813 kernel: Using GB pages for direct mapping
Jan 29 11:08:41.910821 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:08:41.910829 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 29 11:08:41.910836 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:08:41.910844 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:41.910851 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:41.910859 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 29 11:08:41.910869 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:41.910877 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:41.910884 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:41.910892 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:41.910899 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 29 11:08:41.910907 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 29 11:08:41.910914 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 29 11:08:41.910921 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 29 11:08:41.910931 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 29 11:08:41.910939 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 29 11:08:41.910946 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 29 11:08:41.910953 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 29 11:08:41.910961 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 29 11:08:41.910968 kernel: No NUMA configuration found
Jan 29 11:08:41.910976 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 29 11:08:41.910983 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Jan 29 11:08:41.910991 kernel: Zone ranges:
Jan 29 11:08:41.910998 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:08:41.911008 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 29 11:08:41.911015 kernel: Normal empty
Jan 29 11:08:41.911023 kernel: Movable zone start for each node
Jan 29 11:08:41.911031 kernel: Early memory node ranges
Jan 29 11:08:41.911038 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 29 11:08:41.911046 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 29 11:08:41.911053 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 29 11:08:41.911061 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 29 11:08:41.911077 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 29 11:08:41.911088 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 29 11:08:41.911095 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Jan 29 11:08:41.911102 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Jan 29 11:08:41.911110 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 29 11:08:41.911117 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:08:41.911125 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 29 11:08:41.911140 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 29 11:08:41.911150 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:08:41.911157 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 29 11:08:41.911224 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 29 11:08:41.911238 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 29 11:08:41.911248 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 29 11:08:41.911260 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 29 11:08:41.911268 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 11:08:41.911276 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 11:08:41.911284 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 11:08:41.911291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 11:08:41.911301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 11:08:41.911309 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:08:41.911317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 11:08:41.911327 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 11:08:41.911336 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:08:41.911344 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 11:08:41.911351 kernel: TSC deadline timer available
Jan 29 11:08:41.911359 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 29 11:08:41.911377 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 11:08:41.911387 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 29 11:08:41.911395 kernel: kvm-guest: setup PV sched yield
Jan 29 11:08:41.911404 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 29 11:08:41.911414 kernel: Booting paravirtualized kernel on KVM
Jan 29 11:08:41.911424 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:08:41.911432 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 29 11:08:41.911440 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 29 11:08:41.911448 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 29 11:08:41.911455 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 29 11:08:41.911466 kernel: kvm-guest: PV spinlocks enabled
Jan 29 11:08:41.911473 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 11:08:41.911484 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 29 11:08:41.911495 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:08:41.911503 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:08:41.911511 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:08:41.911519 kernel: Fallback order for Node 0: 0
Jan 29 11:08:41.911526 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Jan 29 11:08:41.911536 kernel: Policy zone: DMA32
Jan 29 11:08:41.911544 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:08:41.911552 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 177824K reserved, 0K cma-reserved)
Jan 29 11:08:41.911562 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:08:41.911573 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 11:08:41.911583 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 11:08:41.911594 kernel: Dynamic Preempt: voluntary
Jan 29 11:08:41.911604 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:08:41.911616 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:08:41.911628 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:08:41.911638 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:08:41.911648 kernel: Rude variant of Tasks RCU enabled.
Jan 29 11:08:41.911657 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:08:41.911666 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:08:41.911675 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:08:41.911684 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 29 11:08:41.911705 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:08:41.911714 kernel: Console: colour dummy device 80x25
Jan 29 11:08:41.911723 kernel: printk: console [ttyS0] enabled
Jan 29 11:08:41.911735 kernel: ACPI: Core revision 20230628
Jan 29 11:08:41.911744 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 11:08:41.911754 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:08:41.911763 kernel: x2apic enabled
Jan 29 11:08:41.911772 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 11:08:41.911795 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 29 11:08:41.911815 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 29 11:08:41.911824 kernel: kvm-guest: setup PV IPIs
Jan 29 11:08:41.911833 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 11:08:41.911846 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 11:08:41.911855 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 29 11:08:41.911864 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 11:08:41.911873 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 11:08:41.911883 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 11:08:41.911892 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:08:41.911901 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:08:41.911910 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 11:08:41.911919 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 11:08:41.911931 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 11:08:41.911940 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 11:08:41.911949 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 11:08:41.911959 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 11:08:41.911968 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 11:08:41.911978 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 11:08:41.911987 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 11:08:41.911997 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 11:08:41.912008 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 11:08:41.912017 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 11:08:41.912027 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 11:08:41.912036 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 11:08:41.912045 kernel: Freeing SMP alternatives memory: 32K
Jan 29 11:08:41.912054 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:08:41.912063 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:08:41.912072 kernel: landlock: Up and running.
Jan 29 11:08:41.912082 kernel: SELinux: Initializing.
Jan 29 11:08:41.912093 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:08:41.912102 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:08:41.912112 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 11:08:41.912121 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:08:41.912130 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:08:41.912139 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:08:41.912149 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 11:08:41.912158 kernel: ... version: 0
Jan 29 11:08:41.912167 kernel: ... bit width: 48
Jan 29 11:08:41.912179 kernel: ... generic registers: 6
Jan 29 11:08:41.912188 kernel: ... value mask: 0000ffffffffffff
Jan 29 11:08:41.912197 kernel: ... max period: 00007fffffffffff
Jan 29 11:08:41.912206 kernel: ... fixed-purpose events: 0
Jan 29 11:08:41.912215 kernel: ... event mask: 000000000000003f
Jan 29 11:08:41.912224 kernel: signal: max sigframe size: 1776
Jan 29 11:08:41.912233 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:08:41.912243 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:08:41.912252 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:08:41.912264 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:08:41.912273 kernel: .... node #0, CPUs: #1 #2 #3
Jan 29 11:08:41.912282 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:08:41.912291 kernel: smpboot: Max logical packages: 1
Jan 29 11:08:41.912300 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 29 11:08:41.912309 kernel: devtmpfs: initialized
Jan 29 11:08:41.912318 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:08:41.912327 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 29 11:08:41.912337 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 29 11:08:41.912349 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 29 11:08:41.912358 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 29 11:08:41.912377 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Jan 29 11:08:41.912386 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 29 11:08:41.912396 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:08:41.912405 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:08:41.912414 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:08:41.912423 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:08:41.912432 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:08:41.912445 kernel: audit: type=2000 audit(1738148921.580:1): state=initialized audit_enabled=0 res=1
Jan 29 11:08:41.912454 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:08:41.912472 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:08:41.912482 kernel: cpuidle: using governor menu
Jan 29 11:08:41.912492 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:08:41.912502 kernel: dca service started, version 1.12.1
Jan 29 11:08:41.912512 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 29 11:08:41.912523 kernel: PCI: Using configuration type 1 for base access
Jan 29 11:08:41.912534 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:08:41.912550 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:08:41.912561 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:08:41.912584 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:08:41.912594 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:08:41.912605 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:08:41.912615 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:08:41.912626 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:08:41.912636 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:08:41.912646 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:08:41.912660 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 11:08:41.912671 kernel: ACPI: Interpreter enabled
Jan 29 11:08:41.912681 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 11:08:41.912691 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:08:41.912702 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:08:41.912713 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 11:08:41.912723 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 11:08:41.912734 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:08:41.913003 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:08:41.913212 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 11:08:41.913360 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 11:08:41.913386 kernel: PCI host bridge to bus 0000:00
Jan 29 11:08:41.913547 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 11:08:41.913676 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 11:08:41.913813 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 11:08:41.913970 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 29 11:08:41.914091 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 29 11:08:41.914211 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 29 11:08:41.914343 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:08:41.914520 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 11:08:41.914679 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 29 11:08:41.914846 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 29 11:08:41.915023 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 29 11:08:41.915155 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 29 11:08:41.915284 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 29 11:08:41.915436 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 11:08:41.915649 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:08:41.915796 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 29 11:08:41.915956 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 29 11:08:41.916146 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 29 11:08:41.916279 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 29 11:08:41.916412 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 29 11:08:41.916550 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 29 11:08:41.916695 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 29 11:08:41.916844 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 11:08:41.916975 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 29 11:08:41.917111 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 29 11:08:41.917246 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 29 11:08:41.917377 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 29 11:08:41.917506 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 11:08:41.917647 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 11:08:41.917825 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 11:08:41.917954 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 29 11:08:41.918075 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 29 11:08:41.918213 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 11:08:41.918355 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 29 11:08:41.918380 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 11:08:41.918389 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 11:08:41.918407 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 11:08:41.918419 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 11:08:41.918427 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 11:08:41.918435 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 11:08:41.918443 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 11:08:41.918451 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 11:08:41.918459 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 11:08:41.918466 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 11:08:41.918474 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 11:08:41.918482 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 11:08:41.918493 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 11:08:41.918501 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 11:08:41.918509 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 11:08:41.918516 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 11:08:41.918524 kernel: iommu: Default domain type: Translated
Jan 29 11:08:41.918532 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:08:41.918540 kernel: efivars: Registered efivars operations
Jan 29 11:08:41.918547 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:08:41.918555 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 11:08:41.918569 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 29 11:08:41.918578 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 29 11:08:41.918588 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Jan 29 11:08:41.918598 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Jan 29 11:08:41.918608 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 29 11:08:41.918619 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 29 11:08:41.918629 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Jan 29 11:08:41.918640 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 29 11:08:41.918814 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 11:08:41.918956 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 11:08:41.919078 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 11:08:41.919089 kernel: vgaarb: loaded
Jan 29 11:08:41.919097 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 11:08:41.919106 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 11:08:41.919114 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 11:08:41.919122 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:08:41.919130 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:08:41.919142 kernel: pnp: PnP ACPI init
Jan 29 11:08:41.919293 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 29 11:08:41.919307 kernel: pnp: PnP ACPI: found 6 devices
Jan 29 11:08:41.919317 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:08:41.919327 kernel: NET: Registered PF_INET protocol family
Jan 29 11:08:41.919357 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:08:41.919380 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:08:41.919390 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:08:41.919403 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:08:41.919412 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:08:41.919422 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:08:41.919432 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:08:41.919442 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:08:41.919451 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:08:41.919461 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:08:41.919644 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 29 11:08:41.919833 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 29 11:08:41.919974 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 11:08:41.920107 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 11:08:41.920237 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 11:08:41.920355 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 29 11:08:41.920486 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 29 11:08:41.920604 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 29 11:08:41.920617 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:08:41.920627 kernel: Initialise system trusted keyrings
Jan 29 11:08:41.920641 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:08:41.920650 kernel: Key type asymmetric registered
Jan 29 11:08:41.920660 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:08:41.920669 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 11:08:41.920679 kernel: io scheduler mq-deadline registered
Jan 29 11:08:41.920689 kernel: io scheduler kyber registered
Jan 29 11:08:41.920698 kernel: io scheduler bfq registered
Jan 29 11:08:41.920720 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 11:08:41.920731 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 11:08:41.920757 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 11:08:41.920770 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 29 11:08:41.920792 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:08:41.920802 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 11:08:41.920812 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 11:08:41.920822 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 11:08:41.920834 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 11:08:41.921002 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 11:08:41.921141 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 11:08:41.921155 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 11:08:41.921274 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T11:08:41 UTC (1738148921)
Jan 29 11:08:41.921408 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 29 11:08:41.921421 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 11:08:41.921435 kernel: efifb: probing for efifb
Jan 29 11:08:41.921447 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 29 11:08:41.921457 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 29 11:08:41.921467 kernel: efifb: scrolling: redraw
Jan 29 11:08:41.921476 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 29 11:08:41.921486 kernel: Console: switching to colour frame buffer device 160x50
Jan 29 11:08:41.921496 kernel: fb0: EFI VGA frame buffer device
Jan 29 11:08:41.921506 kernel: pstore: Using crash dump compression: deflate
Jan 29 11:08:41.921515 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 29 11:08:41.921525 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:08:41.921537 kernel: Segment Routing with IPv6
Jan 29 11:08:41.921547 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:08:41.921556 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:08:41.921566 kernel: Key type dns_resolver registered
Jan 29 11:08:41.921576 kernel: IPI shorthand broadcast: enabled
Jan 29 11:08:41.921586 kernel: sched_clock: Marking stable (634002724, 167905124)->(820789273, -18881425)
Jan 29 11:08:41.921595 kernel: registered taskstats version 1
Jan 29 11:08:41.921605 kernel: Loading compiled-in X.509 certificates
Jan 29 11:08:41.921615 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4'
Jan 29 11:08:41.921627 kernel: Key type .fscrypt registered
Jan 29 11:08:41.921636 kernel: Key type fscrypt-provisioning registered
Jan 29 11:08:41.921646 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:08:41.921656 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:08:41.921666 kernel: ima: No architecture policies found Jan 29 11:08:41.921675 kernel: clk: Disabling unused clocks Jan 29 11:08:41.921685 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 29 11:08:41.921694 kernel: Write protecting the kernel read-only data: 38912k Jan 29 11:08:41.921707 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 29 11:08:41.921717 kernel: Run /init as init process Jan 29 11:08:41.921726 kernel: with arguments: Jan 29 11:08:41.921736 kernel: /init Jan 29 11:08:41.921745 kernel: with environment: Jan 29 11:08:41.921755 kernel: HOME=/ Jan 29 11:08:41.921764 kernel: TERM=linux Jan 29 11:08:41.921774 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:08:41.921800 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:08:41.921815 systemd[1]: Detected virtualization kvm. Jan 29 11:08:41.921826 systemd[1]: Detected architecture x86-64. Jan 29 11:08:41.921836 systemd[1]: Running in initrd. Jan 29 11:08:41.921846 systemd[1]: No hostname configured, using default hostname. Jan 29 11:08:41.921856 systemd[1]: Hostname set to . Jan 29 11:08:41.921867 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:08:41.921877 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:08:41.921888 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:08:41.921900 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 29 11:08:41.921911 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:08:41.921922 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:08:41.921932 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:08:41.921943 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:08:41.921955 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:08:41.921968 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:08:41.921979 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:08:41.921989 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:08:41.921999 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:08:41.922009 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:08:41.922020 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:08:41.922030 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:08:41.922040 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:08:41.922050 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:08:41.922064 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:08:41.922074 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:08:41.922094 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:08:41.922104 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:08:41.922115 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 29 11:08:41.922125 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:08:41.922136 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:08:41.922148 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:08:41.922163 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:08:41.922172 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:08:41.922190 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:08:41.922207 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:08:41.922227 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:08:41.922237 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:08:41.922254 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:08:41.922289 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:08:41.922337 systemd-journald[194]: Collecting audit messages is disabled. Jan 29 11:08:41.922371 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:08:41.922458 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:08:41.922467 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:08:41.922476 systemd-journald[194]: Journal started Jan 29 11:08:41.922507 systemd-journald[194]: Runtime Journal (/run/log/journal/648748f5961047a09c86ec41da7cf163) is 6.0M, max 48.2M, 42.2M free. Jan 29 11:08:41.924403 systemd-modules-load[195]: Inserted module 'overlay' Jan 29 11:08:41.934189 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:08:41.937266 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 29 11:08:41.940209 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:08:41.945903 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:08:41.949016 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:08:41.954384 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:08:41.957287 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:08:41.957319 kernel: Bridge firewalling registered Jan 29 11:08:41.957876 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 29 11:08:41.970972 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:08:41.972282 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:08:41.974386 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:08:41.980121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:08:41.983850 dracut-cmdline[224]: dracut-dracut-053 Jan 29 11:08:41.987643 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 29 11:08:41.993894 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:08:41.996438 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:08:42.036419 systemd-resolved[248]: Positive Trust Anchors: Jan 29 11:08:42.036435 systemd-resolved[248]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:08:42.036466 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:08:42.039170 systemd-resolved[248]: Defaulting to hostname 'linux'. Jan 29 11:08:42.040245 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:08:42.047261 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:08:42.107832 kernel: SCSI subsystem initialized Jan 29 11:08:42.118801 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:08:42.129829 kernel: iscsi: registered transport (tcp) Jan 29 11:08:42.150944 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:08:42.151015 kernel: QLogic iSCSI HBA Driver Jan 29 11:08:42.208365 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:08:42.218962 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:08:42.245880 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 29 11:08:42.245961 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:08:42.247045 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:08:42.292831 kernel: raid6: avx2x4 gen() 27278 MB/s Jan 29 11:08:42.309841 kernel: raid6: avx2x2 gen() 20677 MB/s Jan 29 11:08:42.326952 kernel: raid6: avx2x1 gen() 21716 MB/s Jan 29 11:08:42.327017 kernel: raid6: using algorithm avx2x4 gen() 27278 MB/s Jan 29 11:08:42.344908 kernel: raid6: .... xor() 7267 MB/s, rmw enabled Jan 29 11:08:42.344985 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:08:42.367819 kernel: xor: automatically using best checksumming function avx Jan 29 11:08:42.526816 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:08:42.542212 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:08:42.553974 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:08:42.569064 systemd-udevd[417]: Using default interface naming scheme 'v255'. Jan 29 11:08:42.574461 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:08:42.585989 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:08:42.602012 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Jan 29 11:08:42.637713 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:08:42.653957 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:08:42.725041 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:08:42.735046 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:08:42.746636 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:08:42.750451 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 29 11:08:42.751392 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:08:42.751737 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:08:42.761898 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:08:42.772812 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 29 11:08:42.799897 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 29 11:08:42.800070 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:08:42.800088 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:08:42.800110 kernel: GPT:9289727 != 19775487 Jan 29 11:08:42.800126 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:08:42.800147 kernel: GPT:9289727 != 19775487 Jan 29 11:08:42.800165 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:08:42.800186 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:08:42.774201 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:08:42.793948 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:08:42.794077 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:08:42.804465 kernel: libata version 3.00 loaded. Jan 29 11:08:42.796357 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:08:42.797534 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:08:42.797742 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:08:42.799010 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:08:42.811992 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:08:42.820051 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 29 11:08:42.820104 kernel: AES CTR mode by8 optimization enabled Jan 29 11:08:42.823805 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 11:08:42.853991 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 11:08:42.854020 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 11:08:42.854184 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 11:08:42.854375 kernel: scsi host0: ahci Jan 29 11:08:42.854557 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (474) Jan 29 11:08:42.854574 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (478) Jan 29 11:08:42.854588 kernel: scsi host1: ahci Jan 29 11:08:42.854791 kernel: scsi host2: ahci Jan 29 11:08:42.855179 kernel: scsi host3: ahci Jan 29 11:08:42.855423 kernel: scsi host4: ahci Jan 29 11:08:42.855596 kernel: scsi host5: ahci Jan 29 11:08:42.855768 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 29 11:08:42.855795 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 29 11:08:42.855806 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 29 11:08:42.855816 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 29 11:08:42.855831 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 29 11:08:42.855842 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 29 11:08:42.828033 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:08:42.828162 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:08:42.850275 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 11:08:42.861025 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 29 11:08:42.870681 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:08:42.876209 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 11:08:42.877575 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 11:08:42.893946 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:08:42.896095 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:08:42.904686 disk-uuid[554]: Primary Header is updated. Jan 29 11:08:42.904686 disk-uuid[554]: Secondary Entries is updated. Jan 29 11:08:42.904686 disk-uuid[554]: Secondary Header is updated. Jan 29 11:08:42.908802 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:08:42.912830 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:08:42.914100 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:08:42.920623 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:08:42.942213 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 11:08:43.166806 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 11:08:43.166885 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 11:08:43.167819 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 11:08:43.168810 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 11:08:43.169814 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 29 11:08:43.170811 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 11:08:43.170832 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 29 11:08:43.171459 kernel: ata3.00: applying bridge limits Jan 29 11:08:43.172816 kernel: ata3.00: configured for UDMA/100 Jan 29 11:08:43.174814 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 11:08:43.231841 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 29 11:08:43.245723 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 11:08:43.245747 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 29 11:08:43.920824 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:08:43.921028 disk-uuid[556]: The operation has completed successfully. Jan 29 11:08:43.953099 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:08:43.953251 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:08:43.982051 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:08:43.985658 sh[594]: Success Jan 29 11:08:43.998805 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 29 11:08:44.040262 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:08:44.054142 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:08:44.064534 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 11:08:44.088923 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 29 11:08:44.089014 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:08:44.089030 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:08:44.090725 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:08:44.091089 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:08:44.099811 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:08:44.101882 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:08:44.113147 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:08:44.114654 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:08:44.148458 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:08:44.148546 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:08:44.148564 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:08:44.153845 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:08:44.165163 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:08:44.167145 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:08:44.263648 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:08:44.278999 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 29 11:08:44.304267 systemd-networkd[772]: lo: Link UP Jan 29 11:08:44.304281 systemd-networkd[772]: lo: Gained carrier Jan 29 11:08:44.307440 systemd-networkd[772]: Enumeration completed Jan 29 11:08:44.307635 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:08:44.308527 systemd[1]: Reached target network.target - Network. Jan 29 11:08:44.312998 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:08:44.313007 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:08:44.318515 systemd-networkd[772]: eth0: Link UP Jan 29 11:08:44.318526 systemd-networkd[772]: eth0: Gained carrier Jan 29 11:08:44.318538 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:08:44.320620 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:08:44.346314 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 29 11:08:44.350883 systemd-networkd[772]: eth0: DHCPv4 address 10.0.0.46/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:08:44.404334 ignition[776]: Ignition 2.20.0 Jan 29 11:08:44.404347 ignition[776]: Stage: fetch-offline Jan 29 11:08:44.404395 ignition[776]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:08:44.404407 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:08:44.404531 ignition[776]: parsed url from cmdline: "" Jan 29 11:08:44.404536 ignition[776]: no config URL provided Jan 29 11:08:44.404543 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:08:44.404554 ignition[776]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:08:44.404591 ignition[776]: op(1): [started] loading QEMU firmware config module Jan 29 11:08:44.404598 ignition[776]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 29 11:08:44.438329 ignition[776]: op(1): [finished] loading QEMU firmware config module Jan 29 11:08:44.477145 ignition[776]: parsing config with SHA512: d1d2b84f56f8cc5648382c5183a9e2c3c0fcb8d1fe20c987c21bb0eddb071fe0ff563a94fbb8d079c1f5bc8a3da9aaf8ddb4b95be38532bcb73d734a7ad52566 Jan 29 11:08:44.481977 unknown[776]: fetched base config from "system" Jan 29 11:08:44.482809 unknown[776]: fetched user config from "qemu" Jan 29 11:08:44.483891 ignition[776]: fetch-offline: fetch-offline passed Jan 29 11:08:44.483498 systemd-resolved[248]: Detected conflict on linux IN A 10.0.0.46 Jan 29 11:08:44.484531 ignition[776]: Ignition finished successfully Jan 29 11:08:44.483506 systemd-resolved[248]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. Jan 29 11:08:44.486972 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:08:44.488759 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Jan 29 11:08:44.497173 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:08:44.512092 ignition[787]: Ignition 2.20.0 Jan 29 11:08:44.512104 ignition[787]: Stage: kargs Jan 29 11:08:44.512267 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:08:44.512278 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:08:44.516208 ignition[787]: kargs: kargs passed Jan 29 11:08:44.516265 ignition[787]: Ignition finished successfully Jan 29 11:08:44.520918 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:08:44.538099 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:08:44.550374 ignition[796]: Ignition 2.20.0 Jan 29 11:08:44.550389 ignition[796]: Stage: disks Jan 29 11:08:44.550583 ignition[796]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:08:44.550598 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:08:44.551662 ignition[796]: disks: disks passed Jan 29 11:08:44.551715 ignition[796]: Ignition finished successfully Jan 29 11:08:44.557326 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:08:44.558046 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:08:44.559741 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:08:44.562067 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:08:44.564175 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:08:44.564674 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:08:44.579123 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:08:44.593227 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 11:08:44.698985 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jan 29 11:08:44.711926 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:08:44.804924 kernel: EXT4-fs (vda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 29 11:08:44.805748 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:08:44.807351 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:08:44.816927 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:08:44.819004 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:08:44.819807 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 11:08:44.819859 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:08:44.832722 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (814) Jan 29 11:08:44.832753 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:08:44.832768 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:08:44.832795 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:08:44.832809 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:08:44.819886 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:08:44.827818 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:08:44.833848 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:08:44.837111 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 29 11:08:44.874169 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:08:44.877911 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:08:44.881934 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:08:44.885941 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:08:44.997675 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:08:45.008932 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:08:45.011171 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:08:45.019802 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:08:45.041878 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:08:45.044111 ignition[927]: INFO : Ignition 2.20.0 Jan 29 11:08:45.044111 ignition[927]: INFO : Stage: mount Jan 29 11:08:45.044111 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:08:45.044111 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:08:45.044111 ignition[927]: INFO : mount: mount passed Jan 29 11:08:45.044111 ignition[927]: INFO : Ignition finished successfully Jan 29 11:08:45.045037 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:08:45.052060 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:08:45.088313 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:08:45.099338 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 29 11:08:45.114851 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940) Jan 29 11:08:45.118125 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:08:45.118217 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:08:45.118233 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:08:45.123842 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:08:45.126232 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:08:45.156572 ignition[957]: INFO : Ignition 2.20.0 Jan 29 11:08:45.156572 ignition[957]: INFO : Stage: files Jan 29 11:08:45.158535 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:08:45.158535 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:08:45.161029 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:08:45.162613 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:08:45.162613 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:08:45.166902 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:08:45.168401 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:08:45.169893 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:08:45.169044 unknown[957]: wrote ssh authorized keys file for user: core Jan 29 11:08:45.172858 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:08:45.175110 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 11:08:45.220229 
ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:08:45.329610 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:08:45.331838 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:08:45.331838 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 11:08:45.759977 systemd-networkd[772]: eth0: Gained IPv6LL Jan 29 11:08:45.836281 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 11:08:46.021931 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:08:46.021931 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:08:46.025762 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:08:46.025762 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:08:46.025762 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:08:46.025762 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:08:46.025762 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:08:46.025762 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:08:46.025762 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:08:46.025762 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:08:46.025762 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:08:46.025762 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:08:46.025762 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:08:46.025762 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:08:46.025762 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 29 11:08:46.457511 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 11:08:47.039930 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:08:47.039930 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 11:08:47.043831 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:08:47.043831 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:08:47.043831 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 11:08:47.043831 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 29 11:08:47.043831 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:08:47.043831 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:08:47.043831 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 29 11:08:47.043831 ignition[957]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 11:08:47.109109 ignition[957]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:08:47.114919 ignition[957]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:08:47.116834 ignition[957]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 11:08:47.116834 ignition[957]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:08:47.116834 ignition[957]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:08:47.116834 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:08:47.116834 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:08:47.116834 ignition[957]: INFO : files: files passed Jan 29 11:08:47.116834 ignition[957]: INFO : Ignition finished successfully Jan 29 
11:08:47.118869 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:08:47.129958 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:08:47.132079 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:08:47.134871 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:08:47.134987 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:08:47.143205 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 11:08:47.146328 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:08:47.146328 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:08:47.150214 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:08:47.154842 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:08:47.155273 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:08:47.167222 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:08:47.198004 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:08:47.199359 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:08:47.202750 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:08:47.205482 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:08:47.208105 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:08:47.221057 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Jan 29 11:08:47.235542 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:08:47.251038 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:08:47.262657 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:08:47.264238 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:08:47.268770 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:08:47.269631 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:08:47.269889 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:08:47.271284 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:08:47.272138 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:08:47.272553 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:08:47.273166 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:08:47.273580 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:08:47.274179 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:08:47.274576 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:08:47.275250 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:08:47.276564 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:08:47.277166 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:08:47.277511 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:08:47.277710 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:08:47.278523 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:08:47.279126 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 29 11:08:47.329258 ignition[1011]: INFO : Ignition 2.20.0 Jan 29 11:08:47.329258 ignition[1011]: INFO : Stage: umount Jan 29 11:08:47.279485 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:08:47.334166 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:08:47.334166 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:08:47.334166 ignition[1011]: INFO : umount: umount passed Jan 29 11:08:47.334166 ignition[1011]: INFO : Ignition finished successfully Jan 29 11:08:47.279681 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:08:47.280070 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:08:47.280220 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:08:47.281101 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:08:47.281258 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:08:47.281871 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:08:47.282361 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:08:47.285913 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:08:47.286344 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:08:47.286685 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:08:47.287261 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:08:47.287363 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:08:47.287844 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:08:47.287927 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:08:47.288459 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 29 11:08:47.288571 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:08:47.289269 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:08:47.289377 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:08:47.313180 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:08:47.315100 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:08:47.315300 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:08:47.320559 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:08:47.322492 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:08:47.322790 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:08:47.325370 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:08:47.325491 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:08:47.332046 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:08:47.332230 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:08:47.334498 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:08:47.334642 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:08:47.338245 systemd[1]: Stopped target network.target - Network. Jan 29 11:08:47.339579 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:08:47.339672 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:08:47.341905 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:08:47.341954 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:08:47.344202 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:08:47.344260 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jan 29 11:08:47.347158 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:08:47.347208 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:08:47.347689 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:08:47.348066 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:08:47.349821 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:08:47.352836 systemd-networkd[772]: eth0: DHCPv6 lease lost Jan 29 11:08:47.355195 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:08:47.355414 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:08:47.357944 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:08:47.358060 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:08:47.362731 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:08:47.362926 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:08:47.371924 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:08:47.373819 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:08:47.373904 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:08:47.376428 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:08:47.376491 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:08:47.378640 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:08:47.378700 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:08:47.380150 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:08:47.380211 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 29 11:08:47.382742 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:08:47.397279 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:08:47.397438 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:08:47.406131 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:08:47.406405 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:08:47.408741 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:08:47.408836 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:08:47.410463 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:08:47.410513 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:08:47.412620 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:08:47.412678 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:08:47.415496 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:08:47.415578 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:08:47.417673 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:08:47.417741 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:08:47.430098 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:08:47.431440 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:08:47.431515 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:08:47.433939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:08:47.433995 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 29 11:08:47.439121 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:08:47.439336 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:08:47.613918 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:08:47.614116 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:08:47.617452 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:08:47.618867 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:08:47.618966 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:08:47.630169 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:08:47.640999 systemd[1]: Switching root. Jan 29 11:08:47.668989 systemd-journald[194]: Journal stopped Jan 29 11:08:49.472127 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Jan 29 11:08:49.472218 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:08:49.472234 kernel: SELinux: policy capability open_perms=1 Jan 29 11:08:49.472246 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:08:49.472262 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:08:49.472279 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:08:49.472291 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:08:49.472302 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:08:49.472318 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:08:49.472330 kernel: audit: type=1403 audit(1738148928.454:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:08:49.472343 systemd[1]: Successfully loaded SELinux policy in 71.363ms. Jan 29 11:08:49.472367 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.942ms. 
Jan 29 11:08:49.472381 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:08:49.472393 systemd[1]: Detected virtualization kvm. Jan 29 11:08:49.472406 systemd[1]: Detected architecture x86-64. Jan 29 11:08:49.472419 systemd[1]: Detected first boot. Jan 29 11:08:49.472439 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:08:49.472451 zram_generator::config[1055]: No configuration found. Jan 29 11:08:49.472465 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:08:49.472478 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:08:49.472490 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:08:49.472503 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:08:49.472516 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:08:49.472529 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:08:49.472541 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:08:49.472557 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:08:49.472569 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:08:49.472582 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:08:49.472595 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:08:49.472608 systemd[1]: Created slice user.slice - User and Session Slice. 
Jan 29 11:08:49.472621 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:08:49.472633 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:08:49.472646 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:08:49.472660 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:08:49.472674 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:08:49.472686 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:08:49.472700 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 11:08:49.472712 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:08:49.472725 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:08:49.472737 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:08:49.472750 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:08:49.472764 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:08:49.472841 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:08:49.472858 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:08:49.472878 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:08:49.472890 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:08:49.472903 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:08:49.472915 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:08:49.472927 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 29 11:08:49.472940 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:08:49.472955 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:08:49.472968 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:08:49.472980 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:08:49.472993 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:08:49.473005 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:08:49.473017 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:08:49.473030 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:08:49.473042 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:08:49.473054 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:08:49.473070 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:08:49.473082 systemd[1]: Reached target machines.target - Containers. Jan 29 11:08:49.473095 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:08:49.473107 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:08:49.473119 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:08:49.473132 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:08:49.473144 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:08:49.473156 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 29 11:08:49.473182 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:08:49.473199 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:08:49.473212 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:08:49.473224 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:08:49.473238 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:08:49.473251 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:08:49.473263 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:08:49.473276 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:08:49.473288 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:08:49.473303 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:08:49.473316 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:08:49.473328 kernel: fuse: init (API version 7.39) Jan 29 11:08:49.473339 kernel: loop: module loaded Jan 29 11:08:49.473351 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:08:49.473382 systemd-journald[1118]: Collecting audit messages is disabled. Jan 29 11:08:49.473406 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:08:49.473419 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:08:49.473434 systemd-journald[1118]: Journal started Jan 29 11:08:49.473457 systemd-journald[1118]: Runtime Journal (/run/log/journal/648748f5961047a09c86ec41da7cf163) is 6.0M, max 48.2M, 42.2M free. Jan 29 11:08:49.193258 systemd[1]: Queued start job for default target multi-user.target. 
Jan 29 11:08:49.213912 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:08:49.214439 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:08:49.477157 systemd[1]: Stopped verity-setup.service. Jan 29 11:08:49.477198 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:08:49.482280 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:08:49.483338 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:08:49.484826 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:08:49.486282 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:08:49.487840 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:08:49.489332 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:08:49.490876 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:08:49.492585 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:08:49.494665 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:08:49.494994 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:08:49.518266 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:08:49.518488 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:08:49.519810 kernel: ACPI: bus type drm_connector registered Jan 29 11:08:49.520848 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:08:49.521071 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:08:49.522705 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:08:49.522906 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 29 11:08:49.524359 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:08:49.524555 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:08:49.526552 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:08:49.526755 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:08:49.528491 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:08:49.529989 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:08:49.531832 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:08:49.546007 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:08:49.555906 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:08:49.560414 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:08:49.562113 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:08:49.562156 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:08:49.564979 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:08:49.568069 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:08:49.578905 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:08:49.580425 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:08:49.585462 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:08:49.588143 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 29 11:08:49.590529 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:08:49.592373 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:08:49.593915 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:08:49.595754 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:08:49.602948 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:08:49.609655 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:08:49.615245 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:08:49.617751 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:08:49.620132 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:08:49.621237 systemd-journald[1118]: Time spent on flushing to /var/log/journal/648748f5961047a09c86ec41da7cf163 is 22.780ms for 1049 entries. Jan 29 11:08:49.621237 systemd-journald[1118]: System Journal (/var/log/journal/648748f5961047a09c86ec41da7cf163) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:08:49.870033 systemd-journald[1118]: Received client request to flush runtime journal. Jan 29 11:08:49.870116 kernel: loop0: detected capacity change from 0 to 141000 Jan 29 11:08:49.870148 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:08:49.870203 kernel: loop1: detected capacity change from 0 to 205544 Jan 29 11:08:49.625855 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:08:49.653037 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jan 29 11:08:49.663579 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:08:49.676896 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:08:49.697551 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:08:49.701254 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:08:49.749179 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:08:49.751379 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 11:08:49.797273 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:08:49.808013 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:08:49.864934 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Jan 29 11:08:49.864955 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Jan 29 11:08:49.873712 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:08:49.875995 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:08:49.902808 kernel: loop2: detected capacity change from 0 to 138184 Jan 29 11:08:49.964327 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:08:49.965310 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:08:49.987810 kernel: loop3: detected capacity change from 0 to 141000 Jan 29 11:08:50.053601 kernel: loop4: detected capacity change from 0 to 205544 Jan 29 11:08:50.070802 kernel: loop5: detected capacity change from 0 to 138184 Jan 29 11:08:50.080450 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. 
Jan 29 11:08:50.081219 (sd-merge)[1195]: Merged extensions into '/usr'. Jan 29 11:08:50.211408 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:08:50.211426 systemd[1]: Reloading... Jan 29 11:08:50.321813 zram_generator::config[1218]: No configuration found. Jan 29 11:08:50.546313 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:08:50.623224 systemd[1]: Reloading finished in 411 ms. Jan 29 11:08:50.732156 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:08:50.747084 systemd[1]: Starting ensure-sysext.service... Jan 29 11:08:50.750545 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:08:50.767139 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:08:50.767160 systemd[1]: Reloading... Jan 29 11:08:50.788085 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:08:50.788620 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:08:50.790726 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:08:50.791796 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Jan 29 11:08:50.791916 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Jan 29 11:08:50.798930 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:08:50.798950 systemd-tmpfiles[1258]: Skipping /boot Jan 29 11:08:50.848912 zram_generator::config[1282]: No configuration found. 
Jan 29 11:08:50.854653 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:08:50.854678 systemd-tmpfiles[1258]: Skipping /boot Jan 29 11:08:50.902617 ldconfig[1163]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:08:51.024802 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:08:51.092514 systemd[1]: Reloading finished in 324 ms. Jan 29 11:08:51.115237 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:08:51.117437 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:08:51.130733 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:08:51.145241 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:08:51.149071 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:08:51.153059 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:08:51.158285 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:08:51.171095 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:08:51.175351 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:08:51.179511 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:08:51.179726 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:08:51.182566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 29 11:08:51.185198 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:08:51.196170 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:08:51.197793 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:08:51.201091 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:08:51.203147 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:08:51.204238 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:08:51.206313 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:08:51.206607 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:08:51.208848 systemd-udevd[1336]: Using default interface naming scheme 'v255'. Jan 29 11:08:51.212502 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:08:51.212696 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:08:51.217756 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:08:51.218485 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:08:51.228169 augenrules[1359]: No rules Jan 29 11:08:51.229744 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:08:51.230136 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:08:51.233030 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:08:51.241764 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:08:51.255571 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 29 11:08:51.257213 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:08:51.260158 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:08:51.263915 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:08:51.269084 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:08:51.274689 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:08:51.277020 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:08:51.286287 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:08:51.288996 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:08:51.290387 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:08:51.292498 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:08:51.295565 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:08:51.295825 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:08:51.299591 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:08:51.299817 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:08:51.314354 systemd[1]: Finished ensure-sysext.service. Jan 29 11:08:51.316437 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:08:51.320107 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:08:51.320316 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:08:51.360285 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 29 11:08:51.360496 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:08:51.400170 augenrules[1366]: /sbin/augenrules: No change Jan 29 11:08:51.408314 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:08:51.412812 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1372) Jan 29 11:08:51.432011 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:08:51.443972 augenrules[1425]: No rules Jan 29 11:08:51.447746 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:08:51.451464 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:08:51.451608 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:08:51.457039 systemd-resolved[1330]: Positive Trust Anchors: Jan 29 11:08:51.457054 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:08:51.457092 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:08:51.463676 systemd-resolved[1330]: Defaulting to hostname 'linux'. Jan 29 11:08:51.471064 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jan 29 11:08:51.472593 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:08:51.472908 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:08:51.475373 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:08:51.475975 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:08:51.484829 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 11:08:51.490826 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:08:51.504377 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:08:51.534560 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 29 11:08:51.541107 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 11:08:51.541347 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 29 11:08:51.541384 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 11:08:51.541631 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 11:08:51.593207 systemd-networkd[1420]: lo: Link UP Jan 29 11:08:51.593223 systemd-networkd[1420]: lo: Gained carrier Jan 29 11:08:51.594952 systemd-networkd[1420]: Enumeration completed Jan 29 11:08:51.595394 systemd-networkd[1420]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:08:51.595405 systemd-networkd[1420]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 29 11:08:51.596224 systemd-networkd[1420]: eth0: Link UP Jan 29 11:08:51.596229 systemd-networkd[1420]: eth0: Gained carrier Jan 29 11:08:51.596241 systemd-networkd[1420]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:08:51.634472 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:08:51.637156 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:08:51.643314 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:08:51.646239 systemd[1]: Reached target network.target - Network. Jan 29 11:08:51.654754 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:08:51.662220 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:08:51.682028 systemd-networkd[1420]: eth0: DHCPv4 address 10.0.0.46/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:08:51.684922 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:08:51.685675 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:08:51.688331 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:08:52.447751 systemd-timesyncd[1432]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 11:08:52.447852 systemd-timesyncd[1432]: Initial clock synchronization to Wed 2025-01-29 11:08:52.447594 UTC. Jan 29 11:08:52.447953 systemd-resolved[1330]: Clock change detected. Flushing caches. Jan 29 11:08:52.457807 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:08:52.458359 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:08:52.461239 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 29 11:08:52.473588 kernel: kvm_amd: TSC scaling supported Jan 29 11:08:52.473689 kernel: kvm_amd: Nested Virtualization enabled Jan 29 11:08:52.473705 kernel: kvm_amd: Nested Paging enabled Jan 29 11:08:52.474102 kernel: kvm_amd: LBR virtualization supported Jan 29 11:08:52.475542 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 29 11:08:52.475600 kernel: kvm_amd: Virtual GIF supported Jan 29 11:08:52.476148 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:08:52.504788 kernel: EDAC MC: Ver: 3.0.0 Jan 29 11:08:52.546524 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:08:52.555051 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:08:52.557377 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:08:52.570406 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:08:52.611468 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:08:52.613283 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:08:52.614652 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:08:52.616062 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:08:52.617547 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:08:52.619350 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:08:52.620685 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:08:52.622074 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 29 11:08:52.623487 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:08:52.623527 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:08:52.624575 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:08:52.626531 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:08:52.629655 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:08:52.641382 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:08:52.644543 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:08:52.646596 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:08:52.648099 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:08:52.649381 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:08:52.650610 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:08:52.650639 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:08:52.651714 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:08:52.654160 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:08:52.658783 lvm[1461]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:08:52.659152 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:08:52.662718 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:08:52.665906 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jan 29 11:08:52.667163 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:08:52.669203 jq[1464]: false Jan 29 11:08:52.669753 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:08:52.675995 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:08:52.681437 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:08:52.690136 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:08:52.692497 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:08:52.693706 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:08:52.698999 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:08:52.701596 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:08:52.704368 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Jan 29 11:08:52.707220 extend-filesystems[1465]: Found loop3 Jan 29 11:08:52.708354 extend-filesystems[1465]: Found loop4 Jan 29 11:08:52.708354 extend-filesystems[1465]: Found loop5 Jan 29 11:08:52.708354 extend-filesystems[1465]: Found sr0 Jan 29 11:08:52.708354 extend-filesystems[1465]: Found vda Jan 29 11:08:52.708354 extend-filesystems[1465]: Found vda1 Jan 29 11:08:52.708354 extend-filesystems[1465]: Found vda2 Jan 29 11:08:52.708354 extend-filesystems[1465]: Found vda3 Jan 29 11:08:52.708354 extend-filesystems[1465]: Found usr Jan 29 11:08:52.708354 extend-filesystems[1465]: Found vda4 Jan 29 11:08:52.708354 extend-filesystems[1465]: Found vda6 Jan 29 11:08:52.708354 extend-filesystems[1465]: Found vda7 Jan 29 11:08:52.708354 extend-filesystems[1465]: Found vda9 Jan 29 11:08:52.708354 extend-filesystems[1465]: Checking size of /dev/vda9 Jan 29 11:08:52.707385 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:08:52.730751 dbus-daemon[1463]: [system] SELinux support is enabled Jan 29 11:08:52.708603 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:08:52.732625 jq[1477]: true Jan 29 11:08:52.711348 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:08:52.713148 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:08:52.726424 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:08:52.727862 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:08:52.732327 (ntainerd)[1488]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:08:52.733233 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 29 11:08:52.745344 update_engine[1474]: I20250129 11:08:52.745222 1474 main.cc:92] Flatcar Update Engine starting Jan 29 11:08:52.752058 update_engine[1474]: I20250129 11:08:52.746779 1474 update_check_scheduler.cc:74] Next update check in 8m29s Jan 29 11:08:52.756668 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:08:52.756772 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:08:52.758936 jq[1492]: true Jan 29 11:08:52.759487 extend-filesystems[1465]: Resized partition /dev/vda9 Jan 29 11:08:52.767746 extend-filesystems[1499]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:08:52.764605 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:08:52.764634 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:08:52.790814 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1373) Jan 29 11:08:52.794837 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:08:52.795729 tar[1483]: linux-amd64/helm Jan 29 11:08:52.834806 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:08:52.863287 systemd-logind[1471]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:08:52.863325 systemd-logind[1471]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:08:52.872030 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:08:52.872224 systemd-logind[1471]: New seat seat0. 
Jan 29 11:08:52.886862 sshd_keygen[1493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:08:52.894154 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:08:52.898039 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:08:52.952459 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:08:53.036261 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:08:53.041092 systemd[1]: Started sshd@0-10.0.0.46:22-10.0.0.1:33890.service - OpenSSH per-connection server daemon (10.0.0.1:33890). Jan 29 11:08:53.042796 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:08:53.043810 locksmithd[1518]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:08:53.054047 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:08:53.054470 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:08:53.059681 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:08:53.114012 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:08:53.126145 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:08:53.131733 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:08:53.133222 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:08:53.214607 extend-filesystems[1499]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:08:53.214607 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:08:53.214607 extend-filesystems[1499]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:08:53.240281 extend-filesystems[1465]: Resized filesystem in /dev/vda9 Jan 29 11:08:53.215560 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:08:53.215849 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 29 11:08:53.290772 bash[1517]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:08:53.293694 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:08:53.296245 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:08:53.302534 sshd[1532]: Connection closed by authenticating user core 10.0.0.1 port 33890 [preauth] Jan 29 11:08:53.304983 systemd[1]: sshd@0-10.0.0.46:22-10.0.0.1:33890.service: Deactivated successfully. Jan 29 11:08:53.409031 containerd[1488]: time="2025-01-29T11:08:53.408896416Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:08:53.457682 containerd[1488]: time="2025-01-29T11:08:53.457546913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:53.460543 containerd[1488]: time="2025-01-29T11:08:53.460448865Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:08:53.460543 containerd[1488]: time="2025-01-29T11:08:53.460505952Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:08:53.460543 containerd[1488]: time="2025-01-29T11:08:53.460534345Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:08:53.460869 containerd[1488]: time="2025-01-29T11:08:53.460837925Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:08:53.460911 containerd[1488]: time="2025-01-29T11:08:53.460871588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:08:53.461083 containerd[1488]: time="2025-01-29T11:08:53.460989970Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:08:53.461083 containerd[1488]: time="2025-01-29T11:08:53.461019936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:53.461383 containerd[1488]: time="2025-01-29T11:08:53.461330699Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:08:53.461383 containerd[1488]: time="2025-01-29T11:08:53.461363130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:53.461463 containerd[1488]: time="2025-01-29T11:08:53.461385732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:08:53.461463 containerd[1488]: time="2025-01-29T11:08:53.461404587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:53.461652 containerd[1488]: time="2025-01-29T11:08:53.461539631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:53.461960 containerd[1488]: time="2025-01-29T11:08:53.461912179Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:53.462182 containerd[1488]: time="2025-01-29T11:08:53.462110050Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:08:53.462182 containerd[1488]: time="2025-01-29T11:08:53.462141008Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:08:53.462350 containerd[1488]: time="2025-01-29T11:08:53.462275821Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:08:53.462394 containerd[1488]: time="2025-01-29T11:08:53.462369036Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:08:53.549643 containerd[1488]: time="2025-01-29T11:08:53.548987196Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:08:53.549643 containerd[1488]: time="2025-01-29T11:08:53.549135745Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:08:53.549643 containerd[1488]: time="2025-01-29T11:08:53.549161152Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:08:53.549643 containerd[1488]: time="2025-01-29T11:08:53.549199304Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:08:53.549643 containerd[1488]: time="2025-01-29T11:08:53.549223028Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:08:53.549643 containerd[1488]: time="2025-01-29T11:08:53.549525556Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:08:53.551782 containerd[1488]: time="2025-01-29T11:08:53.549957185Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 29 11:08:53.551782 containerd[1488]: time="2025-01-29T11:08:53.550147883Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:08:53.551782 containerd[1488]: time="2025-01-29T11:08:53.550169453Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:08:53.551782 containerd[1488]: time="2025-01-29T11:08:53.550189831Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:08:53.551782 containerd[1488]: time="2025-01-29T11:08:53.550207905Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:08:53.551782 containerd[1488]: time="2025-01-29T11:08:53.550224446Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:08:53.551782 containerd[1488]: time="2025-01-29T11:08:53.550241548Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:08:53.551782 containerd[1488]: time="2025-01-29T11:08:53.550262357Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:08:53.551782 containerd[1488]: time="2025-01-29T11:08:53.550282595Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:08:53.551782 containerd[1488]: time="2025-01-29T11:08:53.550299407Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:08:53.551782 containerd[1488]: time="2025-01-29T11:08:53.550315617Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 29 11:08:53.551782 containerd[1488]: time="2025-01-29T11:08:53.550352526Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:08:53.551782 containerd[1488]: time="2025-01-29T11:08:53.550391990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:08:53.551782 containerd[1488]: time="2025-01-29T11:08:53.550410936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:08:53.552197 containerd[1488]: time="2025-01-29T11:08:53.550427878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:08:53.552197 containerd[1488]: time="2025-01-29T11:08:53.550446683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:08:53.552197 containerd[1488]: time="2025-01-29T11:08:53.550466109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:08:53.552197 containerd[1488]: time="2025-01-29T11:08:53.550485867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:08:53.552197 containerd[1488]: time="2025-01-29T11:08:53.550501826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:08:53.552197 containerd[1488]: time="2025-01-29T11:08:53.550520051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:08:53.552197 containerd[1488]: time="2025-01-29T11:08:53.550538846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:08:53.552197 containerd[1488]: time="2025-01-29T11:08:53.550559475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jan 29 11:08:53.552197 containerd[1488]: time="2025-01-29T11:08:53.550577849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:08:53.552197 containerd[1488]: time="2025-01-29T11:08:53.550595492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:08:53.552197 containerd[1488]: time="2025-01-29T11:08:53.550611512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:08:53.552197 containerd[1488]: time="2025-01-29T11:08:53.550632351Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:08:53.552197 containerd[1488]: time="2025-01-29T11:08:53.550675733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:08:53.552197 containerd[1488]: time="2025-01-29T11:08:53.550694939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:08:53.552197 containerd[1488]: time="2025-01-29T11:08:53.550713473Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:08:53.552589 containerd[1488]: time="2025-01-29T11:08:53.550801068Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:08:53.552589 containerd[1488]: time="2025-01-29T11:08:53.550827617Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:08:53.552589 containerd[1488]: time="2025-01-29T11:08:53.550843277Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jan 29 11:08:53.552589 containerd[1488]: time="2025-01-29T11:08:53.550865248Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:08:53.552589 containerd[1488]: time="2025-01-29T11:08:53.550891026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:08:53.552589 containerd[1488]: time="2025-01-29T11:08:53.550914530Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:08:53.552589 containerd[1488]: time="2025-01-29T11:08:53.550929058Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:08:53.552589 containerd[1488]: time="2025-01-29T11:08:53.550945549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:08:53.552841 containerd[1488]: time="2025-01-29T11:08:53.551450295Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:08:53.552841 containerd[1488]: time="2025-01-29T11:08:53.551533411Z" level=info msg="Connect containerd service" Jan 29 11:08:53.552841 containerd[1488]: time="2025-01-29T11:08:53.551599545Z" level=info msg="using legacy CRI server" Jan 29 11:08:53.552841 containerd[1488]: time="2025-01-29T11:08:53.551611297Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:08:53.553397 containerd[1488]: 
time="2025-01-29T11:08:53.553364285Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:08:53.554550 containerd[1488]: time="2025-01-29T11:08:53.554492510Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:08:53.554862 containerd[1488]: time="2025-01-29T11:08:53.554741858Z" level=info msg="Start subscribing containerd event" Jan 29 11:08:53.558256 containerd[1488]: time="2025-01-29T11:08:53.555213563Z" level=info msg="Start recovering state" Jan 29 11:08:53.558256 containerd[1488]: time="2025-01-29T11:08:53.555098607Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:08:53.558256 containerd[1488]: time="2025-01-29T11:08:53.555384353Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:08:53.558256 containerd[1488]: time="2025-01-29T11:08:53.555407396Z" level=info msg="Start event monitor" Jan 29 11:08:53.558256 containerd[1488]: time="2025-01-29T11:08:53.555447211Z" level=info msg="Start snapshots syncer" Jan 29 11:08:53.558256 containerd[1488]: time="2025-01-29T11:08:53.555466587Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:08:53.558256 containerd[1488]: time="2025-01-29T11:08:53.555497776Z" level=info msg="Start streaming server" Jan 29 11:08:53.558256 containerd[1488]: time="2025-01-29T11:08:53.555623511Z" level=info msg="containerd successfully booted in 0.153242s" Jan 29 11:08:53.556022 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:08:54.331686 systemd-networkd[1420]: eth0: Gained IPv6LL Jan 29 11:08:54.342516 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jan 29 11:08:54.347268 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:08:54.363451 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:08:54.372506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:08:54.406583 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:08:54.449164 tar[1483]: linux-amd64/LICENSE Jan 29 11:08:54.449164 tar[1483]: linux-amd64/README.md Jan 29 11:08:54.464918 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:08:54.474285 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:08:54.474593 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:08:54.482175 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:08:54.489092 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:08:57.165993 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:08:57.172527 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:08:57.175234 (kubelet)[1578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:08:57.179836 systemd[1]: Startup finished in 766ms (kernel) + 6.719s (initrd) + 8.038s (userspace) = 15.525s. 
Jan 29 11:08:57.208784 agetty[1543]: failed to open credentials directory Jan 29 11:08:57.210076 agetty[1544]: failed to open credentials directory Jan 29 11:08:59.166014 kubelet[1578]: E0129 11:08:59.165924 1578 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:08:59.178423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:08:59.178682 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:08:59.179126 systemd[1]: kubelet.service: Consumed 3.109s CPU time. Jan 29 11:09:03.340517 systemd[1]: Started sshd@1-10.0.0.46:22-10.0.0.1:59262.service - OpenSSH per-connection server daemon (10.0.0.1:59262). Jan 29 11:09:03.403282 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 59262 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:09:03.405949 sshd-session[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:03.430430 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:09:03.431931 systemd-logind[1471]: New session 1 of user core. Jan 29 11:09:03.455273 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:09:03.484425 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:09:03.503395 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:09:03.521617 (systemd)[1597]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:09:03.744291 systemd[1597]: Queued start job for default target default.target. Jan 29 11:09:03.762755 systemd[1597]: Created slice app.slice - User Application Slice. 
Jan 29 11:09:03.762825 systemd[1597]: Reached target paths.target - Paths. Jan 29 11:09:03.762843 systemd[1597]: Reached target timers.target - Timers. Jan 29 11:09:03.771121 systemd[1597]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:09:03.808071 systemd[1597]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:09:03.809243 systemd[1597]: Reached target sockets.target - Sockets. Jan 29 11:09:03.809267 systemd[1597]: Reached target basic.target - Basic System. Jan 29 11:09:03.809350 systemd[1597]: Reached target default.target - Main User Target. Jan 29 11:09:03.809401 systemd[1597]: Startup finished in 277ms. Jan 29 11:09:03.810207 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:09:03.826174 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:09:03.925374 systemd[1]: Started sshd@2-10.0.0.46:22-10.0.0.1:59276.service - OpenSSH per-connection server daemon (10.0.0.1:59276). Jan 29 11:09:04.008341 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 59276 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:09:04.011342 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:04.033456 systemd-logind[1471]: New session 2 of user core. Jan 29 11:09:04.043215 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:09:04.123794 sshd[1610]: Connection closed by 10.0.0.1 port 59276 Jan 29 11:09:04.126068 sshd-session[1608]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:04.143160 systemd[1]: sshd@2-10.0.0.46:22-10.0.0.1:59276.service: Deactivated successfully. Jan 29 11:09:04.150326 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:09:04.153110 systemd-logind[1471]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:09:04.175505 systemd[1]: Started sshd@3-10.0.0.46:22-10.0.0.1:59280.service - OpenSSH per-connection server daemon (10.0.0.1:59280). 
Jan 29 11:09:04.178944 systemd-logind[1471]: Removed session 2. Jan 29 11:09:04.265937 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 59280 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:09:04.268653 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:04.286279 systemd-logind[1471]: New session 3 of user core. Jan 29 11:09:04.300120 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:09:04.364816 sshd[1617]: Connection closed by 10.0.0.1 port 59280 Jan 29 11:09:04.366471 sshd-session[1615]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:04.382311 systemd[1]: sshd@3-10.0.0.46:22-10.0.0.1:59280.service: Deactivated successfully. Jan 29 11:09:04.389464 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:09:04.397961 systemd-logind[1471]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:09:04.409287 systemd[1]: Started sshd@4-10.0.0.46:22-10.0.0.1:59296.service - OpenSSH per-connection server daemon (10.0.0.1:59296). Jan 29 11:09:04.409976 systemd-logind[1471]: Removed session 3. Jan 29 11:09:04.459581 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 59296 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:09:04.461109 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:04.475278 systemd-logind[1471]: New session 4 of user core. Jan 29 11:09:04.488128 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:09:04.572275 sshd[1624]: Connection closed by 10.0.0.1 port 59296 Jan 29 11:09:04.561394 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:04.578042 systemd[1]: sshd@4-10.0.0.46:22-10.0.0.1:59296.service: Deactivated successfully. Jan 29 11:09:04.580540 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:09:04.584236 systemd-logind[1471]: Session 4 logged out. 
Waiting for processes to exit. Jan 29 11:09:04.589911 systemd[1]: Started sshd@5-10.0.0.46:22-10.0.0.1:59310.service - OpenSSH per-connection server daemon (10.0.0.1:59310). Jan 29 11:09:04.623854 systemd-logind[1471]: Removed session 4. Jan 29 11:09:04.649059 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 59310 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:09:04.651554 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:04.671012 systemd-logind[1471]: New session 5 of user core. Jan 29 11:09:04.687128 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:09:04.814392 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:09:04.814928 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:09:04.841922 sudo[1632]: pam_unix(sudo:session): session closed for user root Jan 29 11:09:04.845107 sshd[1631]: Connection closed by 10.0.0.1 port 59310 Jan 29 11:09:04.847438 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:04.867944 systemd[1]: sshd@5-10.0.0.46:22-10.0.0.1:59310.service: Deactivated successfully. Jan 29 11:09:04.874071 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:09:04.877051 systemd-logind[1471]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:09:04.887550 systemd[1]: Started sshd@6-10.0.0.46:22-10.0.0.1:59312.service - OpenSSH per-connection server daemon (10.0.0.1:59312). Jan 29 11:09:04.891594 systemd-logind[1471]: Removed session 5. Jan 29 11:09:04.939218 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 59312 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:09:04.941162 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:04.952499 systemd-logind[1471]: New session 6 of user core. 
Jan 29 11:09:04.967100 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:09:05.037319 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:09:05.037889 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:09:05.043615 sudo[1641]: pam_unix(sudo:session): session closed for user root Jan 29 11:09:05.052752 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:09:05.053285 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:09:05.098328 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:09:05.183090 augenrules[1663]: No rules Jan 29 11:09:05.185437 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:09:05.185782 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:09:05.190864 sudo[1640]: pam_unix(sudo:session): session closed for user root Jan 29 11:09:05.193606 sshd[1639]: Connection closed by 10.0.0.1 port 59312 Jan 29 11:09:05.194994 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:05.209263 systemd[1]: sshd@6-10.0.0.46:22-10.0.0.1:59312.service: Deactivated successfully. Jan 29 11:09:05.211840 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:09:05.214140 systemd-logind[1471]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:09:05.247271 systemd[1]: Started sshd@7-10.0.0.46:22-10.0.0.1:59324.service - OpenSSH per-connection server daemon (10.0.0.1:59324). Jan 29 11:09:05.249072 systemd-logind[1471]: Removed session 6. 
Jan 29 11:09:05.306337 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 59324 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:09:05.307456 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:05.318255 systemd-logind[1471]: New session 7 of user core. Jan 29 11:09:05.330157 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:09:05.389855 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:09:05.390294 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:09:05.770435 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:09:05.770518 (dockerd)[1695]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:09:06.095452 dockerd[1695]: time="2025-01-29T11:09:06.095351898Z" level=info msg="Starting up" Jan 29 11:09:06.353470 dockerd[1695]: time="2025-01-29T11:09:06.353092845Z" level=info msg="Loading containers: start." Jan 29 11:09:06.580800 kernel: Initializing XFRM netlink socket Jan 29 11:09:06.709994 systemd-networkd[1420]: docker0: Link UP Jan 29 11:09:06.775553 dockerd[1695]: time="2025-01-29T11:09:06.774665544Z" level=info msg="Loading containers: done." 
Jan 29 11:09:06.829062 dockerd[1695]: time="2025-01-29T11:09:06.828487038Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:09:06.829062 dockerd[1695]: time="2025-01-29T11:09:06.828627412Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 11:09:06.829062 dockerd[1695]: time="2025-01-29T11:09:06.828828529Z" level=info msg="Daemon has completed initialization" Jan 29 11:09:06.908178 dockerd[1695]: time="2025-01-29T11:09:06.908073752Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:09:06.909365 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:09:08.060591 containerd[1488]: time="2025-01-29T11:09:08.060537082Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 11:09:08.739296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288238550.mount: Deactivated successfully. Jan 29 11:09:09.277590 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:09:09.289010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:09:09.491716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:09:09.497740 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:09:09.585354 kubelet[1952]: E0129 11:09:09.584884 1952 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:09:09.595162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:09:09.595447 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:09:11.591645 containerd[1488]: time="2025-01-29T11:09:11.591525295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:11.595819 containerd[1488]: time="2025-01-29T11:09:11.595699944Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 29 11:09:11.599180 containerd[1488]: time="2025-01-29T11:09:11.599045869Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:11.607653 containerd[1488]: time="2025-01-29T11:09:11.604038562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:11.607653 containerd[1488]: time="2025-01-29T11:09:11.605005055Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 3.544421125s" Jan 29 11:09:11.607653 containerd[1488]: time="2025-01-29T11:09:11.605041333Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 29 11:09:11.613398 containerd[1488]: time="2025-01-29T11:09:11.613035444Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 11:09:16.924087 containerd[1488]: time="2025-01-29T11:09:16.923467059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:16.925807 containerd[1488]: time="2025-01-29T11:09:16.925684959Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 29 11:09:16.933672 containerd[1488]: time="2025-01-29T11:09:16.933546552Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:16.935563 containerd[1488]: time="2025-01-29T11:09:16.935510295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:16.937058 containerd[1488]: time="2025-01-29T11:09:16.937009807Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" 
in 5.323906364s" Jan 29 11:09:16.937058 containerd[1488]: time="2025-01-29T11:09:16.937052146Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 29 11:09:16.937945 containerd[1488]: time="2025-01-29T11:09:16.937667961Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 11:09:19.777965 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:09:19.794417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:09:20.045255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:09:20.055500 (kubelet)[1981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:09:20.148434 kubelet[1981]: E0129 11:09:20.148376 1981 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:09:20.153964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:09:20.154526 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 11:09:21.440674 containerd[1488]: time="2025-01-29T11:09:21.440589897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:21.450681 containerd[1488]: time="2025-01-29T11:09:21.450561517Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 29 11:09:21.453336 containerd[1488]: time="2025-01-29T11:09:21.453277902Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:21.491426 containerd[1488]: time="2025-01-29T11:09:21.491300978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:21.492989 containerd[1488]: time="2025-01-29T11:09:21.492901800Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 4.555185088s" Jan 29 11:09:21.492989 containerd[1488]: time="2025-01-29T11:09:21.492960541Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 29 11:09:21.493546 containerd[1488]: time="2025-01-29T11:09:21.493511945Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 11:09:22.940301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount800899956.mount: Deactivated successfully. 
Jan 29 11:09:25.419900 containerd[1488]: time="2025-01-29T11:09:25.419815063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:25.453267 containerd[1488]: time="2025-01-29T11:09:25.453130317Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 29 11:09:25.467728 containerd[1488]: time="2025-01-29T11:09:25.467636123Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:25.498005 containerd[1488]: time="2025-01-29T11:09:25.497934239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:25.498717 containerd[1488]: time="2025-01-29T11:09:25.498686486Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 4.005139735s" Jan 29 11:09:25.498795 containerd[1488]: time="2025-01-29T11:09:25.498718036Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 11:09:25.499586 containerd[1488]: time="2025-01-29T11:09:25.499350472Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:09:26.792634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3500409496.mount: Deactivated successfully. 
Jan 29 11:09:28.037620 containerd[1488]: time="2025-01-29T11:09:28.037548779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:28.058716 containerd[1488]: time="2025-01-29T11:09:28.058665048Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 11:09:28.122471 containerd[1488]: time="2025-01-29T11:09:28.122403273Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:28.166795 containerd[1488]: time="2025-01-29T11:09:28.166721924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:28.168016 containerd[1488]: time="2025-01-29T11:09:28.167980743Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.668595475s" Jan 29 11:09:28.168016 containerd[1488]: time="2025-01-29T11:09:28.168011753Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 11:09:28.168530 containerd[1488]: time="2025-01-29T11:09:28.168496621Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:09:29.686755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223325096.mount: Deactivated successfully. 
Jan 29 11:09:29.693131 containerd[1488]: time="2025-01-29T11:09:29.693062559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:29.693827 containerd[1488]: time="2025-01-29T11:09:29.693739183Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 29 11:09:29.695219 containerd[1488]: time="2025-01-29T11:09:29.695174888Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:29.698267 containerd[1488]: time="2025-01-29T11:09:29.698226752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:29.699212 containerd[1488]: time="2025-01-29T11:09:29.699172410Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.530638888s" Jan 29 11:09:29.699212 containerd[1488]: time="2025-01-29T11:09:29.699217987Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 11:09:29.699794 containerd[1488]: time="2025-01-29T11:09:29.699754302Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 11:09:30.214645 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 11:09:30.222964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:09:30.224389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4259648095.mount: Deactivated successfully. Jan 29 11:09:30.399588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:09:30.404340 (kubelet)[2058]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:09:30.444582 kubelet[2058]: E0129 11:09:30.444495 2058 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:09:30.448591 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:09:30.448825 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:09:32.886052 containerd[1488]: time="2025-01-29T11:09:32.885978111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:32.886894 containerd[1488]: time="2025-01-29T11:09:32.886825396Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 29 11:09:32.888060 containerd[1488]: time="2025-01-29T11:09:32.888013249Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:32.891785 containerd[1488]: time="2025-01-29T11:09:32.891711175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:32.892817 containerd[1488]: time="2025-01-29T11:09:32.892788658Z" level=info msg="Pulled 
image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.192920037s" Jan 29 11:09:32.892876 containerd[1488]: time="2025-01-29T11:09:32.892816761Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 29 11:09:35.207413 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:09:35.219958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:09:35.245983 systemd[1]: Reloading requested from client PID 2146 ('systemctl') (unit session-7.scope)... Jan 29 11:09:35.245998 systemd[1]: Reloading... Jan 29 11:09:35.336790 zram_generator::config[2188]: No configuration found. Jan 29 11:09:35.628026 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:09:35.709428 systemd[1]: Reloading finished in 463 ms. Jan 29 11:09:35.758278 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:09:35.761783 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:09:35.762027 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:09:35.763790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:09:35.916804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:09:35.921472 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:09:35.970936 kubelet[2235]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:09:35.970936 kubelet[2235]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:09:35.970936 kubelet[2235]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:09:35.971342 kubelet[2235]: I0129 11:09:35.970993 2235 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:09:36.225116 kubelet[2235]: I0129 11:09:36.225009 2235 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:09:36.225116 kubelet[2235]: I0129 11:09:36.225041 2235 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:09:36.225296 kubelet[2235]: I0129 11:09:36.225275 2235 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:09:36.340563 kubelet[2235]: I0129 11:09:36.340508 2235 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:09:36.341363 kubelet[2235]: E0129 11:09:36.341306 2235 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:09:36.361468 kubelet[2235]: E0129 11:09:36.361421 2235 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:09:36.361468 kubelet[2235]: I0129 11:09:36.361455 2235 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:09:36.405753 kubelet[2235]: I0129 11:09:36.404799 2235 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:09:36.405753 kubelet[2235]: I0129 11:09:36.404913 2235 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:09:36.405753 kubelet[2235]: I0129 11:09:36.405067 2235 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:09:36.405753 kubelet[2235]: I0129 11:09:36.405092 2235 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:09:36.406020 kubelet[2235]: I0129 11:09:36.405419 2235 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:09:36.406020 kubelet[2235]: I0129 11:09:36.405431 2235 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:09:36.406020 kubelet[2235]: I0129 11:09:36.405600 2235 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:09:36.409527 kubelet[2235]: W0129 11:09:36.409472 2235 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 29 11:09:36.409527 kubelet[2235]: E0129 11:09:36.409521 2235 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:09:36.414652 kubelet[2235]: I0129 11:09:36.414623 2235 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:09:36.414652 kubelet[2235]: I0129 11:09:36.414646 2235 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:09:36.414715 kubelet[2235]: I0129 11:09:36.414681 2235 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:09:36.414715 kubelet[2235]: I0129 11:09:36.414701 2235 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:09:36.415141 kubelet[2235]: W0129 11:09:36.415096 2235 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 29 11:09:36.415194 kubelet[2235]: E0129 11:09:36.415142 2235 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:09:36.433822 kubelet[2235]: I0129 11:09:36.433797 2235 kuberuntime_manager.go:262] "Container runtime initialized" 
containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:09:36.452713 kubelet[2235]: I0129 11:09:36.452687 2235 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:09:36.452806 kubelet[2235]: W0129 11:09:36.452785 2235 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:09:36.455419 kubelet[2235]: I0129 11:09:36.453572 2235 server.go:1269] "Started kubelet" Jan 29 11:09:36.455419 kubelet[2235]: I0129 11:09:36.453643 2235 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:09:36.455419 kubelet[2235]: I0129 11:09:36.454272 2235 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:09:36.455419 kubelet[2235]: I0129 11:09:36.454707 2235 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:09:36.455419 kubelet[2235]: I0129 11:09:36.454816 2235 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:09:36.455419 kubelet[2235]: I0129 11:09:36.455301 2235 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:09:36.456166 kubelet[2235]: I0129 11:09:36.456103 2235 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:09:36.457078 kubelet[2235]: I0129 11:09:36.457063 2235 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:09:36.457248 kubelet[2235]: I0129 11:09:36.457212 2235 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:09:36.457391 kubelet[2235]: I0129 11:09:36.457289 2235 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:09:36.457727 kubelet[2235]: W0129 11:09:36.457571 2235 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: 
Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 29 11:09:36.457727 kubelet[2235]: E0129 11:09:36.457627 2235 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:09:36.457832 kubelet[2235]: E0129 11:09:36.457775 2235 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:09:36.458029 kubelet[2235]: I0129 11:09:36.458011 2235 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:09:36.458736 kubelet[2235]: E0129 11:09:36.458716 2235 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:09:36.458802 kubelet[2235]: E0129 11:09:36.458784 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="200ms" Jan 29 11:09:36.458911 kubelet[2235]: I0129 11:09:36.458894 2235 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:09:36.458911 kubelet[2235]: I0129 11:09:36.458907 2235 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:09:36.463688 kubelet[2235]: E0129 11:09:36.461164 2235 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.46:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.46:6443: 
connect: connection refused" event="&Event{ObjectMeta:{localhost.181f2548f438f10d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:09:36.453546253 +0000 UTC m=+0.528278613,LastTimestamp:2025-01-29 11:09:36.453546253 +0000 UTC m=+0.528278613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:09:36.476438 kubelet[2235]: I0129 11:09:36.475817 2235 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:09:36.477463 kubelet[2235]: I0129 11:09:36.477441 2235 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:09:36.477463 kubelet[2235]: I0129 11:09:36.477458 2235 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:09:36.477547 kubelet[2235]: I0129 11:09:36.477478 2235 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:09:36.479109 kubelet[2235]: I0129 11:09:36.479065 2235 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:09:36.479645 kubelet[2235]: I0129 11:09:36.479127 2235 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:09:36.479645 kubelet[2235]: I0129 11:09:36.479156 2235 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:09:36.479645 kubelet[2235]: E0129 11:09:36.479214 2235 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:09:36.559074 kubelet[2235]: E0129 11:09:36.559016 2235 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:09:36.580242 kubelet[2235]: E0129 11:09:36.580188 2235 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:09:36.659632 kubelet[2235]: E0129 11:09:36.659579 2235 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:09:36.660048 kubelet[2235]: E0129 11:09:36.659995 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="400ms" Jan 29 11:09:36.760666 kubelet[2235]: E0129 11:09:36.760520 2235 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:09:36.780849 kubelet[2235]: E0129 11:09:36.780783 2235 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:09:36.861465 kubelet[2235]: E0129 11:09:36.861400 2235 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:09:36.962389 kubelet[2235]: E0129 11:09:36.962308 2235 kubelet_node_status.go:453] "Error getting the current node from lister" 
err="node \"localhost\" not found" Jan 29 11:09:37.126812 kubelet[2235]: E0129 11:09:37.126522 2235 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.46:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.46:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f2548f438f10d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:09:36.453546253 +0000 UTC m=+0.528278613,LastTimestamp:2025-01-29 11:09:36.453546253 +0000 UTC m=+0.528278613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:09:37.126812 kubelet[2235]: E0129 11:09:37.126690 2235 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:09:37.127424 kubelet[2235]: E0129 11:09:37.127063 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="800ms" Jan 29 11:09:37.127424 kubelet[2235]: W0129 11:09:37.127102 2235 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 29 11:09:37.127424 kubelet[2235]: E0129 11:09:37.127160 2235 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:09:37.173036 kubelet[2235]: I0129 11:09:37.172983 2235 policy_none.go:49] "None policy: Start" Jan 29 11:09:37.173910 kubelet[2235]: I0129 11:09:37.173883 2235 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:09:37.173910 kubelet[2235]: I0129 11:09:37.173912 2235 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:09:37.181294 kubelet[2235]: E0129 11:09:37.181256 2235 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:09:37.226849 kubelet[2235]: E0129 11:09:37.226825 2235 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:09:37.230066 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:09:37.244003 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:09:37.247356 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 11:09:37.260919 kubelet[2235]: I0129 11:09:37.260869 2235 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:09:37.261155 kubelet[2235]: I0129 11:09:37.261129 2235 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:09:37.261200 kubelet[2235]: I0129 11:09:37.261149 2235 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:09:37.261466 kubelet[2235]: I0129 11:09:37.261410 2235 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:09:37.262254 kubelet[2235]: E0129 11:09:37.262238 2235 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:09:37.332262 kubelet[2235]: W0129 11:09:37.332190 2235 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 29 11:09:37.332408 kubelet[2235]: E0129 11:09:37.332264 2235 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:09:37.363300 kubelet[2235]: I0129 11:09:37.363270 2235 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:09:37.363665 kubelet[2235]: E0129 11:09:37.363625 2235 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Jan 29 11:09:37.565940 kubelet[2235]: I0129 11:09:37.565891 2235 
kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:09:37.566250 kubelet[2235]: E0129 11:09:37.566217 2235 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Jan 29 11:09:37.656297 update_engine[1474]: I20250129 11:09:37.656181 1474 update_attempter.cc:509] Updating boot flags... Jan 29 11:09:37.681458 kubelet[2235]: W0129 11:09:37.681384 2235 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 29 11:09:37.681458 kubelet[2235]: E0129 11:09:37.681455 2235 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:09:37.928469 kubelet[2235]: E0129 11:09:37.928279 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="1.6s" Jan 29 11:09:37.968429 kubelet[2235]: I0129 11:09:37.968377 2235 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:09:37.968826 kubelet[2235]: E0129 11:09:37.968783 2235 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Jan 29 11:09:37.990248 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container 
kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. Jan 29 11:09:38.008368 kubelet[2235]: W0129 11:09:38.008196 2235 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 29 11:09:38.008368 kubelet[2235]: E0129 11:09:38.008268 2235 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:09:38.008726 systemd[1]: Created slice kubepods-burstable-pod5535ba4b9ca24ab4709369cf087cc06d.slice - libcontainer container kubepods-burstable-pod5535ba4b9ca24ab4709369cf087cc06d.slice. Jan 29 11:09:38.012630 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. 
Jan 29 11:09:38.047854 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2275) Jan 29 11:09:38.089817 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2278) Jan 29 11:09:38.126095 kubelet[2235]: W0129 11:09:38.126036 2235 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 29 11:09:38.126211 kubelet[2235]: E0129 11:09:38.126099 2235 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:09:38.130474 kubelet[2235]: I0129 11:09:38.130451 2235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:09:38.130803 kubelet[2235]: I0129 11:09:38.130476 2235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:09:38.130803 kubelet[2235]: I0129 11:09:38.130497 2235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:09:38.130803 kubelet[2235]: I0129 11:09:38.130513 2235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:09:38.130803 kubelet[2235]: I0129 11:09:38.130530 2235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5535ba4b9ca24ab4709369cf087cc06d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5535ba4b9ca24ab4709369cf087cc06d\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:09:38.130803 kubelet[2235]: I0129 11:09:38.130609 2235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5535ba4b9ca24ab4709369cf087cc06d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5535ba4b9ca24ab4709369cf087cc06d\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:09:38.130933 kubelet[2235]: I0129 11:09:38.130623 2235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:09:38.130933 kubelet[2235]: I0129 11:09:38.130647 2235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:09:38.130933 kubelet[2235]: I0129 11:09:38.130663 2235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5535ba4b9ca24ab4709369cf087cc06d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5535ba4b9ca24ab4709369cf087cc06d\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:09:38.306837 kubelet[2235]: E0129 11:09:38.306792 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:38.307693 containerd[1488]: time="2025-01-29T11:09:38.307639618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 29 11:09:38.311797 kubelet[2235]: E0129 11:09:38.311753 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:38.312191 containerd[1488]: time="2025-01-29T11:09:38.312160399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5535ba4b9ca24ab4709369cf087cc06d,Namespace:kube-system,Attempt:0,}" Jan 29 11:09:38.314732 kubelet[2235]: E0129 11:09:38.314686 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:38.315079 containerd[1488]: time="2025-01-29T11:09:38.315050288Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 29 11:09:38.428532 kubelet[2235]: E0129 11:09:38.428483 2235 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:09:38.771097 kubelet[2235]: I0129 11:09:38.770882 2235 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:09:38.771234 kubelet[2235]: E0129 11:09:38.771199 2235 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Jan 29 11:09:38.799024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount975577020.mount: Deactivated successfully. 
Jan 29 11:09:38.805061 containerd[1488]: time="2025-01-29T11:09:38.805014393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:09:38.808080 containerd[1488]: time="2025-01-29T11:09:38.807865086Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:09:38.809054 containerd[1488]: time="2025-01-29T11:09:38.809009736Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:09:38.810909 containerd[1488]: time="2025-01-29T11:09:38.810863911Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:09:38.811820 containerd[1488]: time="2025-01-29T11:09:38.811779066Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:09:38.812654 containerd[1488]: time="2025-01-29T11:09:38.812619059Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:09:38.813510 containerd[1488]: time="2025-01-29T11:09:38.813453711Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:09:38.816777 containerd[1488]: time="2025-01-29T11:09:38.816718459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:09:38.819188 
containerd[1488]: time="2025-01-29T11:09:38.819126765Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 506.904529ms" Jan 29 11:09:38.820340 containerd[1488]: time="2025-01-29T11:09:38.820287105Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 512.526336ms" Jan 29 11:09:38.822917 containerd[1488]: time="2025-01-29T11:09:38.822866335Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 507.742457ms" Jan 29 11:09:39.072371 containerd[1488]: time="2025-01-29T11:09:39.072238927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:09:39.072806 containerd[1488]: time="2025-01-29T11:09:39.072720709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:09:39.072891 containerd[1488]: time="2025-01-29T11:09:39.072747110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:09:39.072891 containerd[1488]: time="2025-01-29T11:09:39.072825067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:09:39.072891 containerd[1488]: time="2025-01-29T11:09:39.072841819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:09:39.073157 containerd[1488]: time="2025-01-29T11:09:39.072945255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:09:39.073263 containerd[1488]: time="2025-01-29T11:09:39.073219435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:09:39.073519 containerd[1488]: time="2025-01-29T11:09:39.073434051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:09:39.134908 systemd[1]: Started cri-containerd-8e45bbe361d136a9c7e791e8b37696e9271ec9f52dc441dded9ea0edc97fc3dd.scope - libcontainer container 8e45bbe361d136a9c7e791e8b37696e9271ec9f52dc441dded9ea0edc97fc3dd. Jan 29 11:09:39.136952 containerd[1488]: time="2025-01-29T11:09:39.134718330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:09:39.136952 containerd[1488]: time="2025-01-29T11:09:39.134805965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:09:39.136952 containerd[1488]: time="2025-01-29T11:09:39.134822207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:09:39.136952 containerd[1488]: time="2025-01-29T11:09:39.134907698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:09:39.149941 systemd[1]: Started cri-containerd-53c619a9343b2ed75eca8c58e936867373ab4bffedc42ae51520faffce901201.scope - libcontainer container 53c619a9343b2ed75eca8c58e936867373ab4bffedc42ae51520faffce901201. Jan 29 11:09:39.172954 systemd[1]: Started cri-containerd-4be63aa4494d43f978d1474a595b555ad1e25281c300de95557fcaf21dcdd192.scope - libcontainer container 4be63aa4494d43f978d1474a595b555ad1e25281c300de95557fcaf21dcdd192. Jan 29 11:09:39.189522 containerd[1488]: time="2025-01-29T11:09:39.189473620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e45bbe361d136a9c7e791e8b37696e9271ec9f52dc441dded9ea0edc97fc3dd\"" Jan 29 11:09:39.190916 kubelet[2235]: E0129 11:09:39.190873 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:39.194429 containerd[1488]: time="2025-01-29T11:09:39.193522790Z" level=info msg="CreateContainer within sandbox \"8e45bbe361d136a9c7e791e8b37696e9271ec9f52dc441dded9ea0edc97fc3dd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:09:39.232953 containerd[1488]: time="2025-01-29T11:09:39.232819024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"4be63aa4494d43f978d1474a595b555ad1e25281c300de95557fcaf21dcdd192\"" Jan 29 11:09:39.232953 containerd[1488]: time="2025-01-29T11:09:39.232896631Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5535ba4b9ca24ab4709369cf087cc06d,Namespace:kube-system,Attempt:0,} returns sandbox id \"53c619a9343b2ed75eca8c58e936867373ab4bffedc42ae51520faffce901201\"" Jan 29 11:09:39.234076 kubelet[2235]: E0129 11:09:39.234022 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:39.234506 kubelet[2235]: E0129 11:09:39.234471 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:39.236349 containerd[1488]: time="2025-01-29T11:09:39.236297393Z" level=info msg="CreateContainer within sandbox \"53c619a9343b2ed75eca8c58e936867373ab4bffedc42ae51520faffce901201\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:09:39.236992 containerd[1488]: time="2025-01-29T11:09:39.236747586Z" level=info msg="CreateContainer within sandbox \"4be63aa4494d43f978d1474a595b555ad1e25281c300de95557fcaf21dcdd192\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:09:39.372049 containerd[1488]: time="2025-01-29T11:09:39.371904891Z" level=info msg="CreateContainer within sandbox \"8e45bbe361d136a9c7e791e8b37696e9271ec9f52dc441dded9ea0edc97fc3dd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d2017f9c020920d1f016e6911b18303134e0c1645d8541b9658033f2cb6ae930\"" Jan 29 11:09:39.372827 containerd[1488]: time="2025-01-29T11:09:39.372787353Z" level=info msg="StartContainer for \"d2017f9c020920d1f016e6911b18303134e0c1645d8541b9658033f2cb6ae930\"" Jan 29 11:09:39.378151 containerd[1488]: time="2025-01-29T11:09:39.378114615Z" level=info msg="CreateContainer within sandbox \"4be63aa4494d43f978d1474a595b555ad1e25281c300de95557fcaf21dcdd192\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns 
container id \"938f453e2e631b0523b63c79174bc98c9f2a0c927d017f1517aae07c19529486\"" Jan 29 11:09:39.378681 containerd[1488]: time="2025-01-29T11:09:39.378654077Z" level=info msg="StartContainer for \"938f453e2e631b0523b63c79174bc98c9f2a0c927d017f1517aae07c19529486\"" Jan 29 11:09:39.380168 containerd[1488]: time="2025-01-29T11:09:39.380136314Z" level=info msg="CreateContainer within sandbox \"53c619a9343b2ed75eca8c58e936867373ab4bffedc42ae51520faffce901201\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"551c6ceb7bca2245ca3867bb6f3dae58a62bbe678b13bf07449d59e156cafb2b\"" Jan 29 11:09:39.381123 containerd[1488]: time="2025-01-29T11:09:39.381102495Z" level=info msg="StartContainer for \"551c6ceb7bca2245ca3867bb6f3dae58a62bbe678b13bf07449d59e156cafb2b\"" Jan 29 11:09:39.426992 systemd[1]: Started cri-containerd-d2017f9c020920d1f016e6911b18303134e0c1645d8541b9658033f2cb6ae930.scope - libcontainer container d2017f9c020920d1f016e6911b18303134e0c1645d8541b9658033f2cb6ae930. Jan 29 11:09:39.431233 systemd[1]: Started cri-containerd-938f453e2e631b0523b63c79174bc98c9f2a0c927d017f1517aae07c19529486.scope - libcontainer container 938f453e2e631b0523b63c79174bc98c9f2a0c927d017f1517aae07c19529486. Jan 29 11:09:39.437941 systemd[1]: Started cri-containerd-551c6ceb7bca2245ca3867bb6f3dae58a62bbe678b13bf07449d59e156cafb2b.scope - libcontainer container 551c6ceb7bca2245ca3867bb6f3dae58a62bbe678b13bf07449d59e156cafb2b. 
Jan 29 11:09:39.505700 containerd[1488]: time="2025-01-29T11:09:39.505646604Z" level=info msg="StartContainer for \"d2017f9c020920d1f016e6911b18303134e0c1645d8541b9658033f2cb6ae930\" returns successfully" Jan 29 11:09:39.505700 containerd[1488]: time="2025-01-29T11:09:39.505672303Z" level=info msg="StartContainer for \"551c6ceb7bca2245ca3867bb6f3dae58a62bbe678b13bf07449d59e156cafb2b\" returns successfully" Jan 29 11:09:39.505915 containerd[1488]: time="2025-01-29T11:09:39.505669157Z" level=info msg="StartContainer for \"938f453e2e631b0523b63c79174bc98c9f2a0c927d017f1517aae07c19529486\" returns successfully" Jan 29 11:09:39.513788 kubelet[2235]: E0129 11:09:39.512315 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:39.515341 kubelet[2235]: E0129 11:09:39.515311 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:39.528877 kubelet[2235]: E0129 11:09:39.528824 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="3.2s" Jan 29 11:09:40.372655 kubelet[2235]: I0129 11:09:40.372612 2235 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:09:40.417600 kubelet[2235]: I0129 11:09:40.417541 2235 apiserver.go:52] "Watching apiserver" Jan 29 11:09:40.458021 kubelet[2235]: I0129 11:09:40.457961 2235 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:09:40.517662 kubelet[2235]: E0129 11:09:40.517628 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:40.519336 kubelet[2235]: E0129 11:09:40.519317 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:40.519990 kubelet[2235]: E0129 11:09:40.519959 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:40.536278 kubelet[2235]: I0129 11:09:40.536240 2235 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:09:41.877775 kubelet[2235]: E0129 11:09:41.877647 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:41.877775 kubelet[2235]: E0129 11:09:41.877714 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:42.521891 kubelet[2235]: E0129 11:09:42.521865 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:42.521891 kubelet[2235]: E0129 11:09:42.521909 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:42.658319 kubelet[2235]: E0129 11:09:42.658250 2235 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:42.911463 systemd[1]: Reloading requested from client PID 2531 ('systemctl') (unit session-7.scope)... 
Jan 29 11:09:42.911480 systemd[1]: Reloading... Jan 29 11:09:43.012784 zram_generator::config[2571]: No configuration found. Jan 29 11:09:43.185516 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:09:43.282962 systemd[1]: Reloading finished in 371 ms. Jan 29 11:09:43.325878 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:09:43.345339 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:09:43.345602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:09:43.361232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:09:43.509923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:09:43.515755 (kubelet)[2615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:09:43.561785 kubelet[2615]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:09:43.561785 kubelet[2615]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:09:43.561785 kubelet[2615]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:09:43.561785 kubelet[2615]: I0129 11:09:43.561532 2615 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:09:43.567721 kubelet[2615]: I0129 11:09:43.567687 2615 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:09:43.567721 kubelet[2615]: I0129 11:09:43.567708 2615 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:09:43.568018 kubelet[2615]: I0129 11:09:43.568000 2615 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:09:43.570334 kubelet[2615]: I0129 11:09:43.570296 2615 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:09:43.572451 kubelet[2615]: I0129 11:09:43.572418 2615 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:09:43.577777 kubelet[2615]: E0129 11:09:43.575293 2615 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:09:43.577777 kubelet[2615]: I0129 11:09:43.575323 2615 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:09:43.579861 kubelet[2615]: I0129 11:09:43.579840 2615 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:09:43.579984 kubelet[2615]: I0129 11:09:43.579966 2615 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:09:43.580139 kubelet[2615]: I0129 11:09:43.580112 2615 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:09:43.580280 kubelet[2615]: I0129 11:09:43.580134 2615 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jan 29 11:09:43.580362 kubelet[2615]: I0129 11:09:43.580283 2615 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:09:43.580362 kubelet[2615]: I0129 11:09:43.580293 2615 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:09:43.580362 kubelet[2615]: I0129 11:09:43.580321 2615 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:09:43.580437 kubelet[2615]: I0129 11:09:43.580417 2615 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:09:43.580437 kubelet[2615]: I0129 11:09:43.580431 2615 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:09:43.580473 kubelet[2615]: I0129 11:09:43.580464 2615 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:09:43.580504 kubelet[2615]: I0129 11:09:43.580495 2615 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:09:43.581299 kubelet[2615]: I0129 11:09:43.581256 2615 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:09:43.581696 kubelet[2615]: I0129 11:09:43.581673 2615 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:09:43.584346 kubelet[2615]: I0129 11:09:43.582089 2615 server.go:1269] "Started kubelet" Jan 29 11:09:43.584346 kubelet[2615]: I0129 11:09:43.583988 2615 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:09:43.589109 kubelet[2615]: I0129 11:09:43.588783 2615 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:09:43.592205 kubelet[2615]: I0129 11:09:43.591334 2615 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:09:43.592415 kubelet[2615]: I0129 11:09:43.590205 2615 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:09:43.593665 
kubelet[2615]: I0129 11:09:43.591099 2615 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 11:09:43.593834 kubelet[2615]: I0129 11:09:43.590159 2615 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 11:09:43.594934 kubelet[2615]: E0129 11:09:43.594882 2615 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 11:09:43.596094 kubelet[2615]: I0129 11:09:43.596065 2615 factory.go:221] Registration of the systemd container factory successfully
Jan 29 11:09:43.596157 kubelet[2615]: I0129 11:09:43.596132 2615 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 11:09:43.596193 kubelet[2615]: I0129 11:09:43.596179 2615 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 11:09:43.596228 kubelet[2615]: I0129 11:09:43.596219 2615 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 11:09:43.596497 kubelet[2615]: I0129 11:09:43.596393 2615 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 11:09:43.597003 kubelet[2615]: E0129 11:09:43.596985 2615 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 11:09:43.598018 kubelet[2615]: I0129 11:09:43.598000 2615 factory.go:221] Registration of the containerd container factory successfully
Jan 29 11:09:43.607858 kubelet[2615]: I0129 11:09:43.607820 2615 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 11:09:43.609234 kubelet[2615]: I0129 11:09:43.609210 2615 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 11:09:43.609296 kubelet[2615]: I0129 11:09:43.609242 2615 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 11:09:43.609296 kubelet[2615]: I0129 11:09:43.609258 2615 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 29 11:09:43.609343 kubelet[2615]: E0129 11:09:43.609308 2615 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 11:09:43.636647 kubelet[2615]: I0129 11:09:43.636617 2615 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 11:09:43.636647 kubelet[2615]: I0129 11:09:43.636636 2615 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 11:09:43.636810 kubelet[2615]: I0129 11:09:43.636666 2615 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:09:43.636859 kubelet[2615]: I0129 11:09:43.636823 2615 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 29 11:09:43.636859 kubelet[2615]: I0129 11:09:43.636833 2615 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 29 11:09:43.636859 kubelet[2615]: I0129 11:09:43.636850 2615 policy_none.go:49] "None policy: Start"
Jan 29 11:09:43.637281 kubelet[2615]: I0129 11:09:43.637266 2615 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 11:09:43.637322 kubelet[2615]: I0129 11:09:43.637285 2615 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 11:09:43.637450 kubelet[2615]: I0129 11:09:43.637436 2615 state_mem.go:75] "Updated machine memory state"
Jan 29 11:09:43.641369 kubelet[2615]: I0129 11:09:43.641206 2615 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 11:09:43.641440 kubelet[2615]: I0129 11:09:43.641384 2615 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 11:09:43.641440 kubelet[2615]: I0129 11:09:43.641395 2615 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 11:09:43.641732 kubelet[2615]: I0129 11:09:43.641545 2615 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 11:09:43.715160 kubelet[2615]: E0129 11:09:43.715119 2615 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jan 29 11:09:43.715390 kubelet[2615]: E0129 11:09:43.715368 2615 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:09:43.715439 kubelet[2615]: E0129 11:09:43.715375 2615 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 29 11:09:43.745463 kubelet[2615]: I0129 11:09:43.745438 2615 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 29 11:09:43.751091 kubelet[2615]: I0129 11:09:43.751043 2615 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jan 29 11:09:43.751203 kubelet[2615]: I0129 11:09:43.751138 2615 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jan 29 11:09:43.882542 sudo[2654]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 29 11:09:43.882930 sudo[2654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 29 11:09:43.897092 kubelet[2615]: I0129 11:09:43.897028 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:09:43.897092 kubelet[2615]: I0129 11:09:43.897075 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:09:43.897092 kubelet[2615]: I0129 11:09:43.897094 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost"
Jan 29 11:09:43.897336 kubelet[2615]: I0129 11:09:43.897116 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5535ba4b9ca24ab4709369cf087cc06d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5535ba4b9ca24ab4709369cf087cc06d\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:09:43.897336 kubelet[2615]: I0129 11:09:43.897132 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:09:43.897336 kubelet[2615]: I0129 11:09:43.897146 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:09:43.897336 kubelet[2615]: I0129 11:09:43.897160 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:09:43.897336 kubelet[2615]: I0129 11:09:43.897174 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5535ba4b9ca24ab4709369cf087cc06d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5535ba4b9ca24ab4709369cf087cc06d\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:09:43.897487 kubelet[2615]: I0129 11:09:43.897187 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5535ba4b9ca24ab4709369cf087cc06d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5535ba4b9ca24ab4709369cf087cc06d\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:09:44.016094 kubelet[2615]: E0129 11:09:44.016007 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:44.016094 kubelet[2615]: E0129 11:09:44.016008 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:44.016094 kubelet[2615]: E0129 11:09:44.016018 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:44.534032 sudo[2654]: pam_unix(sudo:session): session closed for user root
Jan 29 11:09:44.581087 kubelet[2615]: I0129 11:09:44.581045 2615 apiserver.go:52] "Watching apiserver"
Jan 29 11:09:44.596588 kubelet[2615]: I0129 11:09:44.596557 2615 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 29 11:09:44.622686 kubelet[2615]: E0129 11:09:44.622637 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:44.775633 kubelet[2615]: E0129 11:09:44.775315 2615 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:09:44.775633 kubelet[2615]: E0129 11:09:44.775528 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:44.775970 kubelet[2615]: E0129 11:09:44.775916 2615 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 29 11:09:44.776179 kubelet[2615]: E0129 11:09:44.776152 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:44.846078 kubelet[2615]: I0129 11:09:44.845933 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.845903247 podStartE2EDuration="3.845903247s" podCreationTimestamp="2025-01-29 11:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:09:44.775126899 +0000 UTC m=+1.250398567" watchObservedRunningTime="2025-01-29 11:09:44.845903247 +0000 UTC m=+1.321174895"
Jan 29 11:09:44.858790 kubelet[2615]: I0129 11:09:44.856946 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.85692535 podStartE2EDuration="3.85692535s" podCreationTimestamp="2025-01-29 11:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:09:44.846336345 +0000 UTC m=+1.321607993" watchObservedRunningTime="2025-01-29 11:09:44.85692535 +0000 UTC m=+1.332196998"
Jan 29 11:09:44.866037 kubelet[2615]: I0129 11:09:44.865940 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.865918539 podStartE2EDuration="2.865918539s" podCreationTimestamp="2025-01-29 11:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:09:44.857808498 +0000 UTC m=+1.333080146" watchObservedRunningTime="2025-01-29 11:09:44.865918539 +0000 UTC m=+1.341190187"
Jan 29 11:09:45.623818 kubelet[2615]: E0129 11:09:45.623724 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:45.624268 kubelet[2615]: E0129 11:09:45.623981 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:46.493798 sudo[1674]: pam_unix(sudo:session): session closed for user root
Jan 29 11:09:46.495110 sshd[1673]: Connection closed by 10.0.0.1 port 59324
Jan 29 11:09:46.495944 sshd-session[1671]: pam_unix(sshd:session): session closed for user core
Jan 29 11:09:46.500170 systemd[1]: sshd@7-10.0.0.46:22-10.0.0.1:59324.service: Deactivated successfully.
Jan 29 11:09:46.502041 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 11:09:46.502234 systemd[1]: session-7.scope: Consumed 4.851s CPU time, 153.2M memory peak, 0B memory swap peak.
Jan 29 11:09:46.502697 systemd-logind[1471]: Session 7 logged out. Waiting for processes to exit.
Jan 29 11:09:46.503632 systemd-logind[1471]: Removed session 7.
Jan 29 11:09:47.246606 kubelet[2615]: I0129 11:09:47.246565 2615 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 29 11:09:47.247121 containerd[1488]: time="2025-01-29T11:09:47.246886366Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 11:09:47.247878 kubelet[2615]: I0129 11:09:47.247127 2615 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 29 11:09:48.009837 kubelet[2615]: W0129 11:09:48.009786 2615 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Jan 29 11:09:48.010089 kubelet[2615]: E0129 11:09:48.009837 2615 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Jan 29 11:09:48.010089 kubelet[2615]: W0129 11:09:48.009972 2615 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Jan 29 11:09:48.010089 kubelet[2615]: E0129 11:09:48.009992 2615 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Jan 29 11:09:48.014944 systemd[1]: Created slice kubepods-besteffort-pod0897b192_6e1b_4124_8714_4674abd81737.slice - libcontainer container kubepods-besteffort-pod0897b192_6e1b_4124_8714_4674abd81737.slice.
Jan 29 11:09:48.028419 kubelet[2615]: I0129 11:09:48.026546 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-host-proc-sys-kernel\") pod \"cilium-gclzx\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " pod="kube-system/cilium-gclzx"
Jan 29 11:09:48.028419 kubelet[2615]: I0129 11:09:48.026588 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0897b192-6e1b-4124-8714-4674abd81737-xtables-lock\") pod \"kube-proxy-4kwmb\" (UID: \"0897b192-6e1b-4124-8714-4674abd81737\") " pod="kube-system/kube-proxy-4kwmb"
Jan 29 11:09:48.028419 kubelet[2615]: I0129 11:09:48.026610 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0897b192-6e1b-4124-8714-4674abd81737-lib-modules\") pod \"kube-proxy-4kwmb\" (UID: \"0897b192-6e1b-4124-8714-4674abd81737\") " pod="kube-system/kube-proxy-4kwmb"
Jan 29 11:09:48.028419 kubelet[2615]: I0129 11:09:48.026630 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-etc-cni-netd\") pod \"cilium-gclzx\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " pod="kube-system/cilium-gclzx"
Jan 29 11:09:48.028419 kubelet[2615]: I0129 11:09:48.026650 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g2h5\" (UniqueName: \"kubernetes.io/projected/0897b192-6e1b-4124-8714-4674abd81737-kube-api-access-5g2h5\") pod \"kube-proxy-4kwmb\" (UID: \"0897b192-6e1b-4124-8714-4674abd81737\") " pod="kube-system/kube-proxy-4kwmb"
Jan 29 11:09:48.027598 systemd[1]: Created slice kubepods-burstable-podfbe8f77f_9f94_4f7a_bbb0_d865a937b584.slice - libcontainer container kubepods-burstable-podfbe8f77f_9f94_4f7a_bbb0_d865a937b584.slice.
Jan 29 11:09:48.028803 kubelet[2615]: I0129 11:09:48.026670 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-cni-path\") pod \"cilium-gclzx\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " pod="kube-system/cilium-gclzx"
Jan 29 11:09:48.028803 kubelet[2615]: I0129 11:09:48.026684 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-xtables-lock\") pod \"cilium-gclzx\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " pod="kube-system/cilium-gclzx"
Jan 29 11:09:48.028803 kubelet[2615]: I0129 11:09:48.026700 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-clustermesh-secrets\") pod \"cilium-gclzx\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " pod="kube-system/cilium-gclzx"
Jan 29 11:09:48.028803 kubelet[2615]: I0129 11:09:48.026714 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-host-proc-sys-net\") pod \"cilium-gclzx\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " pod="kube-system/cilium-gclzx"
Jan 29 11:09:48.028803 kubelet[2615]: I0129 11:09:48.026729 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-cilium-run\") pod \"cilium-gclzx\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " pod="kube-system/cilium-gclzx"
Jan 29 11:09:48.028803 kubelet[2615]: I0129 11:09:48.026742 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-cilium-cgroup\") pod \"cilium-gclzx\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " pod="kube-system/cilium-gclzx"
Jan 29 11:09:48.028995 kubelet[2615]: I0129 11:09:48.026755 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-lib-modules\") pod \"cilium-gclzx\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " pod="kube-system/cilium-gclzx"
Jan 29 11:09:48.028995 kubelet[2615]: I0129 11:09:48.026793 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-hubble-tls\") pod \"cilium-gclzx\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " pod="kube-system/cilium-gclzx"
Jan 29 11:09:48.028995 kubelet[2615]: I0129 11:09:48.026815 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-bpf-maps\") pod \"cilium-gclzx\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " pod="kube-system/cilium-gclzx"
Jan 29 11:09:48.028995 kubelet[2615]: I0129 11:09:48.026832 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-hostproc\") pod \"cilium-gclzx\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " pod="kube-system/cilium-gclzx"
Jan 29 11:09:48.028995 kubelet[2615]: I0129 11:09:48.026849 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfsmx\" (UniqueName: \"kubernetes.io/projected/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-kube-api-access-gfsmx\") pod \"cilium-gclzx\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " pod="kube-system/cilium-gclzx"
Jan 29 11:09:48.028995 kubelet[2615]: I0129 11:09:48.026874 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0897b192-6e1b-4124-8714-4674abd81737-kube-proxy\") pod \"kube-proxy-4kwmb\" (UID: \"0897b192-6e1b-4124-8714-4674abd81737\") " pod="kube-system/kube-proxy-4kwmb"
Jan 29 11:09:48.029182 kubelet[2615]: I0129 11:09:48.026907 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-cilium-config-path\") pod \"cilium-gclzx\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " pod="kube-system/cilium-gclzx"
Jan 29 11:09:48.198932 systemd[1]: Created slice kubepods-besteffort-pod4a85546d_d1a8_4f00_bee2_692cea05a194.slice - libcontainer container kubepods-besteffort-pod4a85546d_d1a8_4f00_bee2_692cea05a194.slice.
Jan 29 11:09:48.228583 kubelet[2615]: I0129 11:09:48.228501 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpd9h\" (UniqueName: \"kubernetes.io/projected/4a85546d-d1a8-4f00-bee2-692cea05a194-kube-api-access-qpd9h\") pod \"cilium-operator-5d85765b45-xpvhq\" (UID: \"4a85546d-d1a8-4f00-bee2-692cea05a194\") " pod="kube-system/cilium-operator-5d85765b45-xpvhq"
Jan 29 11:09:48.228583 kubelet[2615]: I0129 11:09:48.228590 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a85546d-d1a8-4f00-bee2-692cea05a194-cilium-config-path\") pod \"cilium-operator-5d85765b45-xpvhq\" (UID: \"4a85546d-d1a8-4f00-bee2-692cea05a194\") " pod="kube-system/cilium-operator-5d85765b45-xpvhq"
Jan 29 11:09:48.323391 kubelet[2615]: E0129 11:09:48.323348 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:48.324030 containerd[1488]: time="2025-01-29T11:09:48.323995594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4kwmb,Uid:0897b192-6e1b-4124-8714-4674abd81737,Namespace:kube-system,Attempt:0,}"
Jan 29 11:09:48.883797 containerd[1488]: time="2025-01-29T11:09:48.883656587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:09:48.883952 containerd[1488]: time="2025-01-29T11:09:48.883803473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:09:48.883952 containerd[1488]: time="2025-01-29T11:09:48.883842537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:09:48.884017 containerd[1488]: time="2025-01-29T11:09:48.883927468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:09:48.910119 systemd[1]: Started cri-containerd-fc83ca59fd32db886e2a26ba34f5aca5b702c047977b1d526f46ae8ecb534191.scope - libcontainer container fc83ca59fd32db886e2a26ba34f5aca5b702c047977b1d526f46ae8ecb534191.
Jan 29 11:09:48.937374 containerd[1488]: time="2025-01-29T11:09:48.937319125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4kwmb,Uid:0897b192-6e1b-4124-8714-4674abd81737,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc83ca59fd32db886e2a26ba34f5aca5b702c047977b1d526f46ae8ecb534191\""
Jan 29 11:09:48.938377 kubelet[2615]: E0129 11:09:48.938348 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:48.942202 containerd[1488]: time="2025-01-29T11:09:48.942139306Z" level=info msg="CreateContainer within sandbox \"fc83ca59fd32db886e2a26ba34f5aca5b702c047977b1d526f46ae8ecb534191\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 11:09:48.989853 containerd[1488]: time="2025-01-29T11:09:48.989795056Z" level=info msg="CreateContainer within sandbox \"fc83ca59fd32db886e2a26ba34f5aca5b702c047977b1d526f46ae8ecb534191\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"65be10a2c96d9047811ad51e6aa7d480bd18017af67e341e10ab81baedefbefa\""
Jan 29 11:09:48.991739 containerd[1488]: time="2025-01-29T11:09:48.990260323Z" level=info msg="StartContainer for \"65be10a2c96d9047811ad51e6aa7d480bd18017af67e341e10ab81baedefbefa\""
Jan 29 11:09:49.025079 systemd[1]: Started cri-containerd-65be10a2c96d9047811ad51e6aa7d480bd18017af67e341e10ab81baedefbefa.scope - libcontainer container 65be10a2c96d9047811ad51e6aa7d480bd18017af67e341e10ab81baedefbefa.
Jan 29 11:09:49.060009 containerd[1488]: time="2025-01-29T11:09:49.059963255Z" level=info msg="StartContainer for \"65be10a2c96d9047811ad51e6aa7d480bd18017af67e341e10ab81baedefbefa\" returns successfully"
Jan 29 11:09:49.128677 kubelet[2615]: E0129 11:09:49.128625 2615 secret.go:188] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Jan 29 11:09:49.128827 kubelet[2615]: E0129 11:09:49.128704 2615 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Jan 29 11:09:49.128827 kubelet[2615]: E0129 11:09:49.128738 2615 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-clustermesh-secrets podName:fbe8f77f-9f94-4f7a-bbb0-d865a937b584 nodeName:}" failed. No retries permitted until 2025-01-29 11:09:49.628716119 +0000 UTC m=+6.103987767 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-clustermesh-secrets") pod "cilium-gclzx" (UID: "fbe8f77f-9f94-4f7a-bbb0-d865a937b584") : failed to sync secret cache: timed out waiting for the condition
Jan 29 11:09:49.128969 kubelet[2615]: E0129 11:09:49.128845 2615 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-cilium-config-path podName:fbe8f77f-9f94-4f7a-bbb0-d865a937b584 nodeName:}" failed. No retries permitted until 2025-01-29 11:09:49.628825645 +0000 UTC m=+6.104097363 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-cilium-config-path") pod "cilium-gclzx" (UID: "fbe8f77f-9f94-4f7a-bbb0-d865a937b584") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 11:09:49.330593 kubelet[2615]: E0129 11:09:49.330548 2615 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Jan 29 11:09:49.331059 kubelet[2615]: E0129 11:09:49.330655 2615 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4a85546d-d1a8-4f00-bee2-692cea05a194-cilium-config-path podName:4a85546d-d1a8-4f00-bee2-692cea05a194 nodeName:}" failed. No retries permitted until 2025-01-29 11:09:49.830631652 +0000 UTC m=+6.305903300 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/4a85546d-d1a8-4f00-bee2-692cea05a194-cilium-config-path") pod "cilium-operator-5d85765b45-xpvhq" (UID: "4a85546d-d1a8-4f00-bee2-692cea05a194") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 11:09:49.632654 kubelet[2615]: E0129 11:09:49.632528 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:49.682972 kubelet[2615]: I0129 11:09:49.682904 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4kwmb" podStartSLOduration=2.682880817 podStartE2EDuration="2.682880817s" podCreationTimestamp="2025-01-29 11:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:09:49.68254882 +0000 UTC m=+6.157820469" watchObservedRunningTime="2025-01-29 11:09:49.682880817 +0000 UTC m=+6.158152465"
Jan 29 11:09:49.719221 kubelet[2615]: E0129 11:09:49.719185 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:49.832543 kubelet[2615]: E0129 11:09:49.832487 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:49.833077 containerd[1488]: time="2025-01-29T11:09:49.833028298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gclzx,Uid:fbe8f77f-9f94-4f7a-bbb0-d865a937b584,Namespace:kube-system,Attempt:0,}"
Jan 29 11:09:49.962111 containerd[1488]: time="2025-01-29T11:09:49.961878918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:09:49.962111 containerd[1488]: time="2025-01-29T11:09:49.962004294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:09:49.962111 containerd[1488]: time="2025-01-29T11:09:49.962027918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:09:49.962305 containerd[1488]: time="2025-01-29T11:09:49.962148235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:09:49.996041 systemd[1]: Started cri-containerd-5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7.scope - libcontainer container 5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7.
Jan 29 11:09:50.002216 kubelet[2615]: E0129 11:09:50.002179 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:50.004436 containerd[1488]: time="2025-01-29T11:09:50.004385004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xpvhq,Uid:4a85546d-d1a8-4f00-bee2-692cea05a194,Namespace:kube-system,Attempt:0,}"
Jan 29 11:09:50.021314 containerd[1488]: time="2025-01-29T11:09:50.021262384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gclzx,Uid:fbe8f77f-9f94-4f7a-bbb0-d865a937b584,Namespace:kube-system,Attempt:0,} returns sandbox id \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\""
Jan 29 11:09:50.022398 kubelet[2615]: E0129 11:09:50.022347 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:50.024345 containerd[1488]: time="2025-01-29T11:09:50.024311440Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 29 11:09:50.033573 containerd[1488]: time="2025-01-29T11:09:50.033462497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:09:50.033573 containerd[1488]: time="2025-01-29T11:09:50.033543961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:09:50.033737 containerd[1488]: time="2025-01-29T11:09:50.033580430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:09:50.033737 containerd[1488]: time="2025-01-29T11:09:50.033690767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:09:50.053890 systemd[1]: Started cri-containerd-821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4.scope - libcontainer container 821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4.
Jan 29 11:09:50.092638 containerd[1488]: time="2025-01-29T11:09:50.092593874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xpvhq,Uid:4a85546d-d1a8-4f00-bee2-692cea05a194,Namespace:kube-system,Attempt:0,} returns sandbox id \"821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4\""
Jan 29 11:09:50.093513 kubelet[2615]: E0129 11:09:50.093485 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:50.635279 kubelet[2615]: E0129 11:09:50.635240 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:53.373500 kubelet[2615]: E0129 11:09:53.373447 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:53.639727 kubelet[2615]: E0129 11:09:53.639601 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:54.129683 kubelet[2615]: E0129 11:09:54.129651 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:09:59.936054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount201772503.mount: Deactivated successfully.
Jan 29 11:10:02.582375 containerd[1488]: time="2025-01-29T11:10:02.582321334Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:10:02.583400 containerd[1488]: time="2025-01-29T11:10:02.583343055Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jan 29 11:10:02.584637 containerd[1488]: time="2025-01-29T11:10:02.584611529Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:10:02.586393 containerd[1488]: time="2025-01-29T11:10:02.586350838Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.56200319s"
Jan 29 11:10:02.586457 containerd[1488]: time="2025-01-29T11:10:02.586396645Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 29 11:10:02.591434 containerd[1488]: time="2025-01-29T11:10:02.591408314Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 29 11:10:02.603838 containerd[1488]: time="2025-01-29T11:10:02.603684450Z" level=info msg="CreateContainer within sandbox \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 11:10:02.627455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3078280261.mount: Deactivated successfully.
Jan 29 11:10:02.628388 containerd[1488]: time="2025-01-29T11:10:02.628348530Z" level=info msg="CreateContainer within sandbox \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528\""
Jan 29 11:10:02.631444 containerd[1488]: time="2025-01-29T11:10:02.631418792Z" level=info msg="StartContainer for \"68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528\""
Jan 29 11:10:02.672054 systemd[1]: Started cri-containerd-68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528.scope - libcontainer container 68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528.
Jan 29 11:10:02.702450 containerd[1488]: time="2025-01-29T11:10:02.702392660Z" level=info msg="StartContainer for \"68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528\" returns successfully"
Jan 29 11:10:02.713072 systemd[1]: cri-containerd-68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528.scope: Deactivated successfully.
Jan 29 11:10:03.291524 containerd[1488]: time="2025-01-29T11:10:03.291445021Z" level=info msg="shim disconnected" id=68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528 namespace=k8s.io Jan 29 11:10:03.291524 containerd[1488]: time="2025-01-29T11:10:03.291504664Z" level=warning msg="cleaning up after shim disconnected" id=68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528 namespace=k8s.io Jan 29 11:10:03.291524 containerd[1488]: time="2025-01-29T11:10:03.291514593Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:10:03.624745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528-rootfs.mount: Deactivated successfully. Jan 29 11:10:03.662673 kubelet[2615]: E0129 11:10:03.662645 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:03.664135 containerd[1488]: time="2025-01-29T11:10:03.664103308Z" level=info msg="CreateContainer within sandbox \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:10:04.378717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount503594996.mount: Deactivated successfully. 
Jan 29 11:10:04.621169 containerd[1488]: time="2025-01-29T11:10:04.621087355Z" level=info msg="CreateContainer within sandbox \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522\"" Jan 29 11:10:04.622820 containerd[1488]: time="2025-01-29T11:10:04.621751833Z" level=info msg="StartContainer for \"51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522\"" Jan 29 11:10:04.657095 systemd[1]: Started cri-containerd-51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522.scope - libcontainer container 51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522. Jan 29 11:10:04.692444 containerd[1488]: time="2025-01-29T11:10:04.692375866Z" level=info msg="StartContainer for \"51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522\" returns successfully" Jan 29 11:10:04.704101 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:10:04.704435 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:10:04.704521 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:10:04.713834 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:10:04.714142 systemd[1]: cri-containerd-51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522.scope: Deactivated successfully. Jan 29 11:10:04.739438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522-rootfs.mount: Deactivated successfully. Jan 29 11:10:04.815112 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 11:10:05.010809 containerd[1488]: time="2025-01-29T11:10:05.010573238Z" level=info msg="shim disconnected" id=51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522 namespace=k8s.io Jan 29 11:10:05.010809 containerd[1488]: time="2025-01-29T11:10:05.010633281Z" level=warning msg="cleaning up after shim disconnected" id=51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522 namespace=k8s.io Jan 29 11:10:05.010809 containerd[1488]: time="2025-01-29T11:10:05.010642569Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:10:05.025123 containerd[1488]: time="2025-01-29T11:10:05.025049568Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:10:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:10:05.634870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2989619072.mount: Deactivated successfully. Jan 29 11:10:05.669325 kubelet[2615]: E0129 11:10:05.669286 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:05.670946 containerd[1488]: time="2025-01-29T11:10:05.670893045Z" level=info msg="CreateContainer within sandbox \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:10:05.737746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount714485725.mount: Deactivated successfully. 
Jan 29 11:10:05.744202 containerd[1488]: time="2025-01-29T11:10:05.744153094Z" level=info msg="CreateContainer within sandbox \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0\"" Jan 29 11:10:05.745084 containerd[1488]: time="2025-01-29T11:10:05.744881743Z" level=info msg="StartContainer for \"3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0\"" Jan 29 11:10:05.777973 systemd[1]: Started cri-containerd-3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0.scope - libcontainer container 3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0. Jan 29 11:10:05.820364 systemd[1]: cri-containerd-3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0.scope: Deactivated successfully. Jan 29 11:10:05.821074 containerd[1488]: time="2025-01-29T11:10:05.821029949Z" level=info msg="StartContainer for \"3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0\" returns successfully" Jan 29 11:10:05.907915 containerd[1488]: time="2025-01-29T11:10:05.907669357Z" level=info msg="shim disconnected" id=3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0 namespace=k8s.io Jan 29 11:10:05.907915 containerd[1488]: time="2025-01-29T11:10:05.907729541Z" level=warning msg="cleaning up after shim disconnected" id=3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0 namespace=k8s.io Jan 29 11:10:05.907915 containerd[1488]: time="2025-01-29T11:10:05.907738378Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:10:06.053422 containerd[1488]: time="2025-01-29T11:10:06.053368664Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:06.054031 containerd[1488]: 
time="2025-01-29T11:10:06.053957419Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 11:10:06.055090 containerd[1488]: time="2025-01-29T11:10:06.055044060Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:06.056393 containerd[1488]: time="2025-01-29T11:10:06.056360294Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.464918645s" Jan 29 11:10:06.056459 containerd[1488]: time="2025-01-29T11:10:06.056393386Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 11:10:06.058645 containerd[1488]: time="2025-01-29T11:10:06.058616492Z" level=info msg="CreateContainer within sandbox \"821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 11:10:06.071312 containerd[1488]: time="2025-01-29T11:10:06.071231612Z" level=info msg="CreateContainer within sandbox \"821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2\"" Jan 29 11:10:06.071889 containerd[1488]: time="2025-01-29T11:10:06.071837891Z" level=info msg="StartContainer for 
\"231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2\"" Jan 29 11:10:06.102990 systemd[1]: Started cri-containerd-231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2.scope - libcontainer container 231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2. Jan 29 11:10:06.133074 containerd[1488]: time="2025-01-29T11:10:06.133018721Z" level=info msg="StartContainer for \"231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2\" returns successfully" Jan 29 11:10:06.637366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0-rootfs.mount: Deactivated successfully. Jan 29 11:10:06.674293 kubelet[2615]: E0129 11:10:06.674253 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:06.676992 containerd[1488]: time="2025-01-29T11:10:06.676950022Z" level=info msg="CreateContainer within sandbox \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:10:06.688606 kubelet[2615]: E0129 11:10:06.688573 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:06.879141 containerd[1488]: time="2025-01-29T11:10:06.879056829Z" level=info msg="CreateContainer within sandbox \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777\"" Jan 29 11:10:06.883734 kubelet[2615]: I0129 11:10:06.883558 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-xpvhq" podStartSLOduration=2.920549131 
podStartE2EDuration="18.88353349s" podCreationTimestamp="2025-01-29 11:09:48 +0000 UTC" firstStartedPulling="2025-01-29 11:09:50.094250547 +0000 UTC m=+6.569522195" lastFinishedPulling="2025-01-29 11:10:06.057234906 +0000 UTC m=+22.532506554" observedRunningTime="2025-01-29 11:10:06.88333162 +0000 UTC m=+23.358603268" watchObservedRunningTime="2025-01-29 11:10:06.88353349 +0000 UTC m=+23.358805148" Jan 29 11:10:06.886087 containerd[1488]: time="2025-01-29T11:10:06.886026904Z" level=info msg="StartContainer for \"1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777\"" Jan 29 11:10:06.939958 systemd[1]: Started cri-containerd-1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777.scope - libcontainer container 1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777. Jan 29 11:10:07.015491 systemd[1]: cri-containerd-1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777.scope: Deactivated successfully. Jan 29 11:10:07.048096 containerd[1488]: time="2025-01-29T11:10:07.047929700Z" level=info msg="StartContainer for \"1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777\" returns successfully" Jan 29 11:10:07.073603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777-rootfs.mount: Deactivated successfully. 
Jan 29 11:10:07.276344 containerd[1488]: time="2025-01-29T11:10:07.276062564Z" level=info msg="shim disconnected" id=1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777 namespace=k8s.io Jan 29 11:10:07.276344 containerd[1488]: time="2025-01-29T11:10:07.276179605Z" level=warning msg="cleaning up after shim disconnected" id=1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777 namespace=k8s.io Jan 29 11:10:07.276344 containerd[1488]: time="2025-01-29T11:10:07.276216164Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:10:07.679820 kubelet[2615]: E0129 11:10:07.679749 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:07.679820 kubelet[2615]: E0129 11:10:07.679826 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:07.682631 containerd[1488]: time="2025-01-29T11:10:07.682380968Z" level=info msg="CreateContainer within sandbox \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:10:07.710733 containerd[1488]: time="2025-01-29T11:10:07.710685718Z" level=info msg="CreateContainer within sandbox \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067\"" Jan 29 11:10:07.711340 containerd[1488]: time="2025-01-29T11:10:07.711173805Z" level=info msg="StartContainer for \"80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067\"" Jan 29 11:10:07.766067 systemd[1]: Started cri-containerd-80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067.scope - libcontainer container 
80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067. Jan 29 11:10:07.801634 containerd[1488]: time="2025-01-29T11:10:07.801577858Z" level=info msg="StartContainer for \"80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067\" returns successfully" Jan 29 11:10:07.958500 kubelet[2615]: I0129 11:10:07.957258 2615 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 11:10:08.030049 systemd[1]: Created slice kubepods-burstable-pod13478adf_a8ed_4fc6_9ff7_016ebb2df611.slice - libcontainer container kubepods-burstable-pod13478adf_a8ed_4fc6_9ff7_016ebb2df611.slice. Jan 29 11:10:08.054557 kubelet[2615]: I0129 11:10:08.054514 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13478adf-a8ed-4fc6-9ff7-016ebb2df611-config-volume\") pod \"coredns-6f6b679f8f-ntl6k\" (UID: \"13478adf-a8ed-4fc6-9ff7-016ebb2df611\") " pod="kube-system/coredns-6f6b679f8f-ntl6k" Jan 29 11:10:08.054730 kubelet[2615]: I0129 11:10:08.054564 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vr9p\" (UniqueName: \"kubernetes.io/projected/13478adf-a8ed-4fc6-9ff7-016ebb2df611-kube-api-access-5vr9p\") pod \"coredns-6f6b679f8f-ntl6k\" (UID: \"13478adf-a8ed-4fc6-9ff7-016ebb2df611\") " pod="kube-system/coredns-6f6b679f8f-ntl6k" Jan 29 11:10:08.155037 kubelet[2615]: I0129 11:10:08.155003 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2fkt\" (UniqueName: \"kubernetes.io/projected/5e5a9b87-91b3-456a-a200-b8d35f441232-kube-api-access-m2fkt\") pod \"coredns-6f6b679f8f-pvbtw\" (UID: \"5e5a9b87-91b3-456a-a200-b8d35f441232\") " pod="kube-system/coredns-6f6b679f8f-pvbtw" Jan 29 11:10:08.155037 kubelet[2615]: I0129 11:10:08.155043 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e5a9b87-91b3-456a-a200-b8d35f441232-config-volume\") pod \"coredns-6f6b679f8f-pvbtw\" (UID: \"5e5a9b87-91b3-456a-a200-b8d35f441232\") " pod="kube-system/coredns-6f6b679f8f-pvbtw" Jan 29 11:10:08.155411 systemd[1]: Created slice kubepods-burstable-pod5e5a9b87_91b3_456a_a200_b8d35f441232.slice - libcontainer container kubepods-burstable-pod5e5a9b87_91b3_456a_a200_b8d35f441232.slice. Jan 29 11:10:08.459312 kubelet[2615]: E0129 11:10:08.459274 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:08.473472 containerd[1488]: time="2025-01-29T11:10:08.473428127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pvbtw,Uid:5e5a9b87-91b3-456a-a200-b8d35f441232,Namespace:kube-system,Attempt:0,}" Jan 29 11:10:08.634974 kubelet[2615]: E0129 11:10:08.633275 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:08.635830 containerd[1488]: time="2025-01-29T11:10:08.635775994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ntl6k,Uid:13478adf-a8ed-4fc6-9ff7-016ebb2df611,Namespace:kube-system,Attempt:0,}" Jan 29 11:10:08.692612 kubelet[2615]: E0129 11:10:08.692582 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:08.714724 kubelet[2615]: I0129 11:10:08.712466 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gclzx" podStartSLOduration=9.144779583 podStartE2EDuration="21.712443467s" podCreationTimestamp="2025-01-29 11:09:47 +0000 UTC" firstStartedPulling="2025-01-29 11:09:50.02358978 +0000 UTC m=+6.498861428" 
lastFinishedPulling="2025-01-29 11:10:02.591253664 +0000 UTC m=+19.066525312" observedRunningTime="2025-01-29 11:10:08.708442251 +0000 UTC m=+25.183713900" watchObservedRunningTime="2025-01-29 11:10:08.712443467 +0000 UTC m=+25.187715115" Jan 29 11:10:09.693887 kubelet[2615]: E0129 11:10:09.693857 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:10.392962 systemd-networkd[1420]: cilium_host: Link UP Jan 29 11:10:10.393210 systemd-networkd[1420]: cilium_net: Link UP Jan 29 11:10:10.393216 systemd-networkd[1420]: cilium_net: Gained carrier Jan 29 11:10:10.393476 systemd-networkd[1420]: cilium_host: Gained carrier Jan 29 11:10:10.395855 systemd-networkd[1420]: cilium_host: Gained IPv6LL Jan 29 11:10:10.500387 systemd-networkd[1420]: cilium_vxlan: Link UP Jan 29 11:10:10.500399 systemd-networkd[1420]: cilium_vxlan: Gained carrier Jan 29 11:10:10.696018 kubelet[2615]: E0129 11:10:10.695891 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:10.711807 kernel: NET: Registered PF_ALG protocol family Jan 29 11:10:11.251944 systemd-networkd[1420]: cilium_net: Gained IPv6LL Jan 29 11:10:11.383033 systemd-networkd[1420]: lxc_health: Link UP Jan 29 11:10:11.395423 systemd-networkd[1420]: lxc_health: Gained carrier Jan 29 11:10:11.648398 systemd-networkd[1420]: lxcf133a7aa56dc: Link UP Jan 29 11:10:11.657955 kernel: eth0: renamed from tmpff59e Jan 29 11:10:11.662083 systemd-networkd[1420]: lxcf133a7aa56dc: Gained carrier Jan 29 11:10:11.697534 kubelet[2615]: E0129 11:10:11.697458 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:11.866558 systemd-networkd[1420]: 
lxc9575d789e9f0: Link UP Jan 29 11:10:11.881800 kernel: eth0: renamed from tmpd544b Jan 29 11:10:11.888920 systemd-networkd[1420]: lxc9575d789e9f0: Gained carrier Jan 29 11:10:12.404009 systemd-networkd[1420]: cilium_vxlan: Gained IPv6LL Jan 29 11:10:12.531960 systemd-networkd[1420]: lxc_health: Gained IPv6LL Jan 29 11:10:12.699280 kubelet[2615]: E0129 11:10:12.699058 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:12.980061 systemd-networkd[1420]: lxc9575d789e9f0: Gained IPv6LL Jan 29 11:10:12.980436 systemd-networkd[1420]: lxcf133a7aa56dc: Gained IPv6LL Jan 29 11:10:13.700375 kubelet[2615]: E0129 11:10:13.700339 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:15.358325 containerd[1488]: time="2025-01-29T11:10:15.358232347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:10:15.358325 containerd[1488]: time="2025-01-29T11:10:15.358276039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:10:15.358325 containerd[1488]: time="2025-01-29T11:10:15.358286008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:15.358874 containerd[1488]: time="2025-01-29T11:10:15.358355398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:15.395952 systemd[1]: Started cri-containerd-d544b8dc62df28340a1ebaae91406fb08d4f341665304329ff778bf90bf86c88.scope - libcontainer container d544b8dc62df28340a1ebaae91406fb08d4f341665304329ff778bf90bf86c88. 
Jan 29 11:10:15.410726 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:10:15.432155 containerd[1488]: time="2025-01-29T11:10:15.431942768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:10:15.432155 containerd[1488]: time="2025-01-29T11:10:15.432042806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:10:15.432155 containerd[1488]: time="2025-01-29T11:10:15.432076439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:15.432482 containerd[1488]: time="2025-01-29T11:10:15.432205040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:15.463290 systemd[1]: Started cri-containerd-ff59eedf7a524fcbb0daf59cf4d629e5c568ff05279e444c69951fd9723bea76.scope - libcontainer container ff59eedf7a524fcbb0daf59cf4d629e5c568ff05279e444c69951fd9723bea76. 
Jan 29 11:10:15.463568 containerd[1488]: time="2025-01-29T11:10:15.463424946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ntl6k,Uid:13478adf-a8ed-4fc6-9ff7-016ebb2df611,Namespace:kube-system,Attempt:0,} returns sandbox id \"d544b8dc62df28340a1ebaae91406fb08d4f341665304329ff778bf90bf86c88\"" Jan 29 11:10:15.464127 kubelet[2615]: E0129 11:10:15.464090 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:15.476576 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:10:15.476736 containerd[1488]: time="2025-01-29T11:10:15.476708964Z" level=info msg="CreateContainer within sandbox \"d544b8dc62df28340a1ebaae91406fb08d4f341665304329ff778bf90bf86c88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:10:15.498165 containerd[1488]: time="2025-01-29T11:10:15.498130306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pvbtw,Uid:5e5a9b87-91b3-456a-a200-b8d35f441232,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff59eedf7a524fcbb0daf59cf4d629e5c568ff05279e444c69951fd9723bea76\"" Jan 29 11:10:15.498995 kubelet[2615]: E0129 11:10:15.498967 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:15.500297 containerd[1488]: time="2025-01-29T11:10:15.500275824Z" level=info msg="CreateContainer within sandbox \"ff59eedf7a524fcbb0daf59cf4d629e5c568ff05279e444c69951fd9723bea76\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:10:16.362018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount402094742.mount: Deactivated successfully. 
Jan 29 11:10:16.622219 containerd[1488]: time="2025-01-29T11:10:16.622105688Z" level=info msg="CreateContainer within sandbox \"d544b8dc62df28340a1ebaae91406fb08d4f341665304329ff778bf90bf86c88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e0e1405097d399225b72d68e9b3794576738c601bebef10267e7b8af6078f60\"" Jan 29 11:10:16.622893 containerd[1488]: time="2025-01-29T11:10:16.622848683Z" level=info msg="StartContainer for \"3e0e1405097d399225b72d68e9b3794576738c601bebef10267e7b8af6078f60\"" Jan 29 11:10:16.641242 containerd[1488]: time="2025-01-29T11:10:16.641202475Z" level=info msg="CreateContainer within sandbox \"ff59eedf7a524fcbb0daf59cf4d629e5c568ff05279e444c69951fd9723bea76\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"93075e6fcff151ca5ca99737a22a944a17558c2a40cbcd7f6d1bbf6c4fe96c20\"" Jan 29 11:10:16.644027 containerd[1488]: time="2025-01-29T11:10:16.643061735Z" level=info msg="StartContainer for \"93075e6fcff151ca5ca99737a22a944a17558c2a40cbcd7f6d1bbf6c4fe96c20\"" Jan 29 11:10:16.657987 systemd[1]: Started cri-containerd-3e0e1405097d399225b72d68e9b3794576738c601bebef10267e7b8af6078f60.scope - libcontainer container 3e0e1405097d399225b72d68e9b3794576738c601bebef10267e7b8af6078f60. Jan 29 11:10:16.674940 systemd[1]: Started cri-containerd-93075e6fcff151ca5ca99737a22a944a17558c2a40cbcd7f6d1bbf6c4fe96c20.scope - libcontainer container 93075e6fcff151ca5ca99737a22a944a17558c2a40cbcd7f6d1bbf6c4fe96c20. 
Jan 29 11:10:16.932258 containerd[1488]: time="2025-01-29T11:10:16.932066578Z" level=info msg="StartContainer for \"3e0e1405097d399225b72d68e9b3794576738c601bebef10267e7b8af6078f60\" returns successfully" Jan 29 11:10:16.932258 containerd[1488]: time="2025-01-29T11:10:16.932066478Z" level=info msg="StartContainer for \"93075e6fcff151ca5ca99737a22a944a17558c2a40cbcd7f6d1bbf6c4fe96c20\" returns successfully" Jan 29 11:10:16.937081 kubelet[2615]: E0129 11:10:16.937028 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:17.092642 kubelet[2615]: I0129 11:10:17.092558 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ntl6k" podStartSLOduration=29.092534384 podStartE2EDuration="29.092534384s" podCreationTimestamp="2025-01-29 11:09:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:10:17.092438122 +0000 UTC m=+33.567709770" watchObservedRunningTime="2025-01-29 11:10:17.092534384 +0000 UTC m=+33.567806042" Jan 29 11:10:17.187490 systemd[1]: Started sshd@8-10.0.0.46:22-10.0.0.1:37374.service - OpenSSH per-connection server daemon (10.0.0.1:37374). Jan 29 11:10:17.257110 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 37374 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:10:17.259291 sshd-session[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:17.267270 systemd-logind[1471]: New session 8 of user core. Jan 29 11:10:17.282010 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 29 11:10:17.442513 sshd[4005]: Connection closed by 10.0.0.1 port 37374 Jan 29 11:10:17.442860 sshd-session[3998]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:17.447197 systemd[1]: sshd@8-10.0.0.46:22-10.0.0.1:37374.service: Deactivated successfully. Jan 29 11:10:17.449299 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:10:17.450295 systemd-logind[1471]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:10:17.451218 systemd-logind[1471]: Removed session 8. Jan 29 11:10:17.938444 kubelet[2615]: E0129 11:10:17.938369 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:17.939693 kubelet[2615]: E0129 11:10:17.938481 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:17.947643 kubelet[2615]: I0129 11:10:17.947585 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-pvbtw" podStartSLOduration=29.94756852 podStartE2EDuration="29.94756852s" podCreationTimestamp="2025-01-29 11:09:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:10:17.947070856 +0000 UTC m=+34.422342504" watchObservedRunningTime="2025-01-29 11:10:17.94756852 +0000 UTC m=+34.422840168" Jan 29 11:10:18.940136 kubelet[2615]: E0129 11:10:18.940101 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:18.940136 kubelet[2615]: E0129 11:10:18.940107 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:19.941371 kubelet[2615]: E0129 11:10:19.941328 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:22.458179 systemd[1]: Started sshd@9-10.0.0.46:22-10.0.0.1:56270.service - OpenSSH per-connection server daemon (10.0.0.1:56270). Jan 29 11:10:22.497173 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 56270 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:10:22.498651 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:22.502552 systemd-logind[1471]: New session 9 of user core. Jan 29 11:10:22.508887 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:10:22.620549 sshd[4041]: Connection closed by 10.0.0.1 port 56270 Jan 29 11:10:22.620999 sshd-session[4039]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:22.624590 systemd[1]: sshd@9-10.0.0.46:22-10.0.0.1:56270.service: Deactivated successfully. Jan 29 11:10:22.626384 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:10:22.627065 systemd-logind[1471]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:10:22.627917 systemd-logind[1471]: Removed session 9. Jan 29 11:10:27.632192 systemd[1]: Started sshd@10-10.0.0.46:22-10.0.0.1:56794.service - OpenSSH per-connection server daemon (10.0.0.1:56794). Jan 29 11:10:27.715585 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 56794 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:10:27.717935 sshd-session[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:27.723002 systemd-logind[1471]: New session 10 of user core. Jan 29 11:10:27.743070 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 29 11:10:27.871413 sshd[4058]: Connection closed by 10.0.0.1 port 56794 Jan 29 11:10:27.872632 sshd-session[4056]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:27.879726 systemd[1]: sshd@10-10.0.0.46:22-10.0.0.1:56794.service: Deactivated successfully. Jan 29 11:10:27.882533 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:10:27.883462 systemd-logind[1471]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:10:27.885651 systemd-logind[1471]: Removed session 10. Jan 29 11:10:32.911463 systemd[1]: Started sshd@11-10.0.0.46:22-10.0.0.1:56800.service - OpenSSH per-connection server daemon (10.0.0.1:56800). Jan 29 11:10:33.027305 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 56800 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:10:33.029676 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:33.050791 systemd-logind[1471]: New session 11 of user core. Jan 29 11:10:33.062276 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:10:33.255930 sshd[4073]: Connection closed by 10.0.0.1 port 56800 Jan 29 11:10:33.256824 sshd-session[4071]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:33.264910 systemd[1]: sshd@11-10.0.0.46:22-10.0.0.1:56800.service: Deactivated successfully. Jan 29 11:10:33.268667 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:10:33.272927 systemd-logind[1471]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:10:33.278220 systemd-logind[1471]: Removed session 11. Jan 29 11:10:38.269321 systemd[1]: Started sshd@12-10.0.0.46:22-10.0.0.1:36568.service - OpenSSH per-connection server daemon (10.0.0.1:36568). 
Jan 29 11:10:38.311399 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 36568 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:10:38.313173 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:38.325394 systemd-logind[1471]: New session 12 of user core. Jan 29 11:10:38.336035 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:10:38.465524 sshd[4088]: Connection closed by 10.0.0.1 port 36568 Jan 29 11:10:38.465904 sshd-session[4086]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:38.480253 systemd[1]: sshd@12-10.0.0.46:22-10.0.0.1:36568.service: Deactivated successfully. Jan 29 11:10:38.482870 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:10:38.484816 systemd-logind[1471]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:10:38.497323 systemd[1]: Started sshd@13-10.0.0.46:22-10.0.0.1:36584.service - OpenSSH per-connection server daemon (10.0.0.1:36584). Jan 29 11:10:38.499613 systemd-logind[1471]: Removed session 12. Jan 29 11:10:38.539334 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 36584 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:10:38.541128 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:38.546756 systemd-logind[1471]: New session 13 of user core. Jan 29 11:10:38.556015 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:10:38.724291 sshd[4103]: Connection closed by 10.0.0.1 port 36584 Jan 29 11:10:38.727420 sshd-session[4101]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:38.733566 systemd[1]: sshd@13-10.0.0.46:22-10.0.0.1:36584.service: Deactivated successfully. Jan 29 11:10:38.736888 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:10:38.739314 systemd-logind[1471]: Session 13 logged out. Waiting for processes to exit. 
Jan 29 11:10:38.748223 systemd[1]: Started sshd@14-10.0.0.46:22-10.0.0.1:36592.service - OpenSSH per-connection server daemon (10.0.0.1:36592). Jan 29 11:10:38.749640 systemd-logind[1471]: Removed session 13. Jan 29 11:10:38.801007 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 36592 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:10:38.802713 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:38.807880 systemd-logind[1471]: New session 14 of user core. Jan 29 11:10:38.818967 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:10:38.965965 sshd[4116]: Connection closed by 10.0.0.1 port 36592 Jan 29 11:10:38.966326 sshd-session[4114]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:38.970834 systemd[1]: sshd@14-10.0.0.46:22-10.0.0.1:36592.service: Deactivated successfully. Jan 29 11:10:38.973427 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:10:38.975700 systemd-logind[1471]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:10:38.977278 systemd-logind[1471]: Removed session 14. Jan 29 11:10:43.982412 systemd[1]: Started sshd@15-10.0.0.46:22-10.0.0.1:36606.service - OpenSSH per-connection server daemon (10.0.0.1:36606). Jan 29 11:10:44.028055 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 36606 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:10:44.029996 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:44.035183 systemd-logind[1471]: New session 15 of user core. Jan 29 11:10:44.043050 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 29 11:10:44.170360 sshd[4134]: Connection closed by 10.0.0.1 port 36606 Jan 29 11:10:44.170729 sshd-session[4132]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:44.175893 systemd[1]: sshd@15-10.0.0.46:22-10.0.0.1:36606.service: Deactivated successfully. Jan 29 11:10:44.178414 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:10:44.179296 systemd-logind[1471]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:10:44.181129 systemd-logind[1471]: Removed session 15. Jan 29 11:10:49.182182 systemd[1]: Started sshd@16-10.0.0.46:22-10.0.0.1:49348.service - OpenSSH per-connection server daemon (10.0.0.1:49348). Jan 29 11:10:49.227679 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 49348 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:10:49.229599 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:49.234033 systemd-logind[1471]: New session 16 of user core. Jan 29 11:10:49.243910 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:10:49.365319 sshd[4152]: Connection closed by 10.0.0.1 port 49348 Jan 29 11:10:49.365745 sshd-session[4148]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:49.370453 systemd[1]: sshd@16-10.0.0.46:22-10.0.0.1:49348.service: Deactivated successfully. Jan 29 11:10:49.372671 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:10:49.373373 systemd-logind[1471]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:10:49.374374 systemd-logind[1471]: Removed session 16. Jan 29 11:10:54.411656 systemd[1]: Started sshd@17-10.0.0.46:22-10.0.0.1:49356.service - OpenSSH per-connection server daemon (10.0.0.1:49356). 
Jan 29 11:10:54.470470 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 49356 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:10:54.472843 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:54.484264 systemd-logind[1471]: New session 17 of user core. Jan 29 11:10:54.502128 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:10:54.682083 sshd[4166]: Connection closed by 10.0.0.1 port 49356 Jan 29 11:10:54.682980 sshd-session[4164]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:54.703095 systemd[1]: sshd@17-10.0.0.46:22-10.0.0.1:49356.service: Deactivated successfully. Jan 29 11:10:54.708091 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:10:54.710422 systemd-logind[1471]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:10:54.729683 systemd[1]: Started sshd@18-10.0.0.46:22-10.0.0.1:49360.service - OpenSSH per-connection server daemon (10.0.0.1:49360). Jan 29 11:10:54.745525 systemd-logind[1471]: Removed session 17. Jan 29 11:10:54.804346 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 49360 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:10:54.805287 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:54.816892 systemd-logind[1471]: New session 18 of user core. Jan 29 11:10:54.841779 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:10:55.305498 sshd[4180]: Connection closed by 10.0.0.1 port 49360 Jan 29 11:10:55.307225 sshd-session[4178]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:55.317656 systemd[1]: sshd@18-10.0.0.46:22-10.0.0.1:49360.service: Deactivated successfully. Jan 29 11:10:55.319678 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:10:55.321623 systemd-logind[1471]: Session 18 logged out. Waiting for processes to exit. 
Jan 29 11:10:55.329119 systemd[1]: Started sshd@19-10.0.0.46:22-10.0.0.1:49372.service - OpenSSH per-connection server daemon (10.0.0.1:49372). Jan 29 11:10:55.330498 systemd-logind[1471]: Removed session 18. Jan 29 11:10:55.368791 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 49372 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:10:55.370566 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:55.376152 systemd-logind[1471]: New session 19 of user core. Jan 29 11:10:55.385952 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:10:57.129335 sshd[4193]: Connection closed by 10.0.0.1 port 49372 Jan 29 11:10:57.132864 sshd-session[4191]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:57.138236 systemd[1]: sshd@19-10.0.0.46:22-10.0.0.1:49372.service: Deactivated successfully. Jan 29 11:10:57.140237 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:10:57.141033 systemd-logind[1471]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:10:57.152119 systemd[1]: Started sshd@20-10.0.0.46:22-10.0.0.1:49376.service - OpenSSH per-connection server daemon (10.0.0.1:49376). Jan 29 11:10:57.152832 systemd-logind[1471]: Removed session 19. Jan 29 11:10:57.198086 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 49376 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:10:57.200083 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:57.204374 systemd-logind[1471]: New session 20 of user core. Jan 29 11:10:57.213968 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 29 11:10:57.475470 sshd[4212]: Connection closed by 10.0.0.1 port 49376 Jan 29 11:10:57.477638 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:57.489406 systemd[1]: sshd@20-10.0.0.46:22-10.0.0.1:49376.service: Deactivated successfully. Jan 29 11:10:57.491413 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:10:57.493099 systemd-logind[1471]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:10:57.494424 systemd[1]: Started sshd@21-10.0.0.46:22-10.0.0.1:56454.service - OpenSSH per-connection server daemon (10.0.0.1:56454). Jan 29 11:10:57.495336 systemd-logind[1471]: Removed session 20. Jan 29 11:10:57.534511 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 56454 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:10:57.536168 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:57.540839 systemd-logind[1471]: New session 21 of user core. Jan 29 11:10:57.551006 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 11:10:57.612718 kubelet[2615]: E0129 11:10:57.612672 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:10:57.670148 sshd[4225]: Connection closed by 10.0.0.1 port 56454 Jan 29 11:10:57.670545 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:57.674941 systemd[1]: sshd@21-10.0.0.46:22-10.0.0.1:56454.service: Deactivated successfully. Jan 29 11:10:57.677528 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:10:57.678295 systemd-logind[1471]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:10:57.679370 systemd-logind[1471]: Removed session 21. Jan 29 11:11:02.681611 systemd[1]: Started sshd@22-10.0.0.46:22-10.0.0.1:56468.service - OpenSSH per-connection server daemon (10.0.0.1:56468). 
Jan 29 11:11:02.719847 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 56468 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:11:02.721500 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:02.726223 systemd-logind[1471]: New session 22 of user core. Jan 29 11:11:02.739975 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:11:02.859239 sshd[4239]: Connection closed by 10.0.0.1 port 56468 Jan 29 11:11:02.859631 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:02.864226 systemd[1]: sshd@22-10.0.0.46:22-10.0.0.1:56468.service: Deactivated successfully. Jan 29 11:11:02.867202 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:11:02.868030 systemd-logind[1471]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:11:02.869161 systemd-logind[1471]: Removed session 22. Jan 29 11:11:04.610583 kubelet[2615]: E0129 11:11:04.610451 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:11:04.611118 kubelet[2615]: E0129 11:11:04.610647 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:11:07.870567 systemd[1]: Started sshd@23-10.0.0.46:22-10.0.0.1:46990.service - OpenSSH per-connection server daemon (10.0.0.1:46990). Jan 29 11:11:07.910485 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 46990 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:11:07.912159 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:07.916599 systemd-logind[1471]: New session 23 of user core. Jan 29 11:11:07.922959 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 29 11:11:08.060998 sshd[4256]: Connection closed by 10.0.0.1 port 46990 Jan 29 11:11:08.061358 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:08.065221 systemd[1]: sshd@23-10.0.0.46:22-10.0.0.1:46990.service: Deactivated successfully. Jan 29 11:11:08.067240 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 11:11:08.067874 systemd-logind[1471]: Session 23 logged out. Waiting for processes to exit. Jan 29 11:11:08.068724 systemd-logind[1471]: Removed session 23. Jan 29 11:11:13.076638 systemd[1]: Started sshd@24-10.0.0.46:22-10.0.0.1:47006.service - OpenSSH per-connection server daemon (10.0.0.1:47006). Jan 29 11:11:13.113707 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 47006 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:11:13.115167 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:13.118905 systemd-logind[1471]: New session 24 of user core. Jan 29 11:11:13.130899 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 11:11:13.244805 sshd[4270]: Connection closed by 10.0.0.1 port 47006 Jan 29 11:11:13.245167 sshd-session[4268]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:13.249085 systemd[1]: sshd@24-10.0.0.46:22-10.0.0.1:47006.service: Deactivated successfully. Jan 29 11:11:13.251646 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 11:11:13.252480 systemd-logind[1471]: Session 24 logged out. Waiting for processes to exit. Jan 29 11:11:13.253406 systemd-logind[1471]: Removed session 24. Jan 29 11:11:16.610193 kubelet[2615]: E0129 11:11:16.610141 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:11:18.261351 systemd[1]: Started sshd@25-10.0.0.46:22-10.0.0.1:59668.service - OpenSSH per-connection server daemon (10.0.0.1:59668). 
Jan 29 11:11:18.298309 sshd[4283]: Accepted publickey for core from 10.0.0.1 port 59668 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:11:18.299677 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:18.303496 systemd-logind[1471]: New session 25 of user core. Jan 29 11:11:18.315958 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 11:11:18.416022 sshd[4285]: Connection closed by 10.0.0.1 port 59668 Jan 29 11:11:18.416358 sshd-session[4283]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:18.419850 systemd[1]: sshd@25-10.0.0.46:22-10.0.0.1:59668.service: Deactivated successfully. Jan 29 11:11:18.421719 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 11:11:18.422725 systemd-logind[1471]: Session 25 logged out. Waiting for processes to exit. Jan 29 11:11:18.423674 systemd-logind[1471]: Removed session 25. Jan 29 11:11:23.433158 systemd[1]: Started sshd@26-10.0.0.46:22-10.0.0.1:59676.service - OpenSSH per-connection server daemon (10.0.0.1:59676). Jan 29 11:11:23.472587 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 59676 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:11:23.474547 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:23.479112 systemd-logind[1471]: New session 26 of user core. Jan 29 11:11:23.489883 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 29 11:11:23.601119 sshd[4301]: Connection closed by 10.0.0.1 port 59676 Jan 29 11:11:23.601535 sshd-session[4299]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:23.611151 kubelet[2615]: E0129 11:11:23.611062 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:11:23.617265 systemd[1]: sshd@26-10.0.0.46:22-10.0.0.1:59676.service: Deactivated successfully. Jan 29 11:11:23.619326 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 11:11:23.620878 systemd-logind[1471]: Session 26 logged out. Waiting for processes to exit. Jan 29 11:11:23.630395 systemd[1]: Started sshd@27-10.0.0.46:22-10.0.0.1:59680.service - OpenSSH per-connection server daemon (10.0.0.1:59680). Jan 29 11:11:23.631811 systemd-logind[1471]: Removed session 26. Jan 29 11:11:23.666697 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 59680 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:11:23.668382 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:23.673191 systemd-logind[1471]: New session 27 of user core. Jan 29 11:11:23.685902 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 29 11:11:25.075318 containerd[1488]: time="2025-01-29T11:11:25.075249029Z" level=info msg="StopContainer for \"231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2\" with timeout 30 (s)" Jan 29 11:11:25.076104 containerd[1488]: time="2025-01-29T11:11:25.075859274Z" level=info msg="Stop container \"231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2\" with signal terminated" Jan 29 11:11:25.088578 systemd[1]: cri-containerd-231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2.scope: Deactivated successfully. 
Jan 29 11:11:25.115421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2-rootfs.mount: Deactivated successfully. Jan 29 11:11:25.119547 containerd[1488]: time="2025-01-29T11:11:25.119488837Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:11:25.128205 containerd[1488]: time="2025-01-29T11:11:25.128165820Z" level=info msg="StopContainer for \"80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067\" with timeout 2 (s)" Jan 29 11:11:25.128486 containerd[1488]: time="2025-01-29T11:11:25.128454326Z" level=info msg="Stop container \"80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067\" with signal terminated" Jan 29 11:11:25.135163 systemd-networkd[1420]: lxc_health: Link DOWN Jan 29 11:11:25.135573 systemd-networkd[1420]: lxc_health: Lost carrier Jan 29 11:11:25.162754 systemd[1]: cri-containerd-80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067.scope: Deactivated successfully. Jan 29 11:11:25.163283 systemd[1]: cri-containerd-80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067.scope: Consumed 7.221s CPU time. Jan 29 11:11:25.184607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067-rootfs.mount: Deactivated successfully. 
Jan 29 11:11:25.261453 containerd[1488]: time="2025-01-29T11:11:25.261363638Z" level=info msg="shim disconnected" id=80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067 namespace=k8s.io Jan 29 11:11:25.261453 containerd[1488]: time="2025-01-29T11:11:25.261435784Z" level=warning msg="cleaning up after shim disconnected" id=80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067 namespace=k8s.io Jan 29 11:11:25.261453 containerd[1488]: time="2025-01-29T11:11:25.261447006Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:11:25.262268 containerd[1488]: time="2025-01-29T11:11:25.262172237Z" level=info msg="shim disconnected" id=231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2 namespace=k8s.io Jan 29 11:11:25.262593 containerd[1488]: time="2025-01-29T11:11:25.262364431Z" level=warning msg="cleaning up after shim disconnected" id=231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2 namespace=k8s.io Jan 29 11:11:25.262593 containerd[1488]: time="2025-01-29T11:11:25.262382506Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:11:25.276715 containerd[1488]: time="2025-01-29T11:11:25.276619854Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:11:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:11:25.281652 containerd[1488]: time="2025-01-29T11:11:25.281604012Z" level=info msg="StopContainer for \"231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2\" returns successfully" Jan 29 11:11:25.281808 containerd[1488]: time="2025-01-29T11:11:25.281731984Z" level=info msg="StopContainer for \"80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067\" returns successfully" Jan 29 11:11:25.285544 containerd[1488]: time="2025-01-29T11:11:25.285507476Z" level=info msg="StopPodSandbox for 
\"821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4\"" Jan 29 11:11:25.285642 containerd[1488]: time="2025-01-29T11:11:25.285547931Z" level=info msg="Container to stop \"231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:11:25.287164 containerd[1488]: time="2025-01-29T11:11:25.287096110Z" level=info msg="StopPodSandbox for \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\"" Jan 29 11:11:25.287313 containerd[1488]: time="2025-01-29T11:11:25.287163357Z" level=info msg="Container to stop \"68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:11:25.287313 containerd[1488]: time="2025-01-29T11:11:25.287221627Z" level=info msg="Container to stop \"51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:11:25.287313 containerd[1488]: time="2025-01-29T11:11:25.287234482Z" level=info msg="Container to stop \"80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:11:25.287313 containerd[1488]: time="2025-01-29T11:11:25.287245984Z" level=info msg="Container to stop \"3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:11:25.287313 containerd[1488]: time="2025-01-29T11:11:25.287256874Z" level=info msg="Container to stop \"1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:11:25.287832 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4-shm.mount: Deactivated successfully. 
Jan 29 11:11:25.295383 systemd[1]: cri-containerd-5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7.scope: Deactivated successfully. Jan 29 11:11:25.297080 systemd[1]: cri-containerd-821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4.scope: Deactivated successfully. Jan 29 11:11:25.326030 containerd[1488]: time="2025-01-29T11:11:25.325883204Z" level=info msg="shim disconnected" id=5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7 namespace=k8s.io Jan 29 11:11:25.326030 containerd[1488]: time="2025-01-29T11:11:25.325954239Z" level=warning msg="cleaning up after shim disconnected" id=5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7 namespace=k8s.io Jan 29 11:11:25.326030 containerd[1488]: time="2025-01-29T11:11:25.325964478Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:11:25.329492 containerd[1488]: time="2025-01-29T11:11:25.329425905Z" level=info msg="shim disconnected" id=821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4 namespace=k8s.io Jan 29 11:11:25.329492 containerd[1488]: time="2025-01-29T11:11:25.329465791Z" level=warning msg="cleaning up after shim disconnected" id=821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4 namespace=k8s.io Jan 29 11:11:25.329492 containerd[1488]: time="2025-01-29T11:11:25.329475309Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:11:25.349474 containerd[1488]: time="2025-01-29T11:11:25.349411247Z" level=info msg="TearDown network for sandbox \"821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4\" successfully" Jan 29 11:11:25.349474 containerd[1488]: time="2025-01-29T11:11:25.349454668Z" level=info msg="StopPodSandbox for \"821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4\" returns successfully" Jan 29 11:11:25.349665 containerd[1488]: time="2025-01-29T11:11:25.349502118Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:11:25Z\" level=warning msg=\"failed to remove 
runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:11:25.351139 containerd[1488]: time="2025-01-29T11:11:25.351102496Z" level=info msg="TearDown network for sandbox \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\" successfully" Jan 29 11:11:25.351139 containerd[1488]: time="2025-01-29T11:11:25.351126631Z" level=info msg="StopPodSandbox for \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\" returns successfully" Jan 29 11:11:25.501309 kubelet[2615]: I0129 11:11:25.501255 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-host-proc-sys-kernel\") pod \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " Jan 29 11:11:25.501309 kubelet[2615]: I0129 11:11:25.501312 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-hostproc\") pod \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " Jan 29 11:11:25.502341 kubelet[2615]: I0129 11:11:25.501342 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfsmx\" (UniqueName: \"kubernetes.io/projected/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-kube-api-access-gfsmx\") pod \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " Jan 29 11:11:25.502341 kubelet[2615]: I0129 11:11:25.501365 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-etc-cni-netd\") pod \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " Jan 29 11:11:25.502341 kubelet[2615]: 
I0129 11:11:25.501380 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-cni-path\") pod \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " Jan 29 11:11:25.502341 kubelet[2615]: I0129 11:11:25.501394 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-xtables-lock\") pod \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " Jan 29 11:11:25.502341 kubelet[2615]: I0129 11:11:25.501413 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpd9h\" (UniqueName: \"kubernetes.io/projected/4a85546d-d1a8-4f00-bee2-692cea05a194-kube-api-access-qpd9h\") pod \"4a85546d-d1a8-4f00-bee2-692cea05a194\" (UID: \"4a85546d-d1a8-4f00-bee2-692cea05a194\") " Jan 29 11:11:25.502341 kubelet[2615]: I0129 11:11:25.501430 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-cilium-run\") pod \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " Jan 29 11:11:25.502512 kubelet[2615]: I0129 11:11:25.501446 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a85546d-d1a8-4f00-bee2-692cea05a194-cilium-config-path\") pod \"4a85546d-d1a8-4f00-bee2-692cea05a194\" (UID: \"4a85546d-d1a8-4f00-bee2-692cea05a194\") " Jan 29 11:11:25.502512 kubelet[2615]: I0129 11:11:25.501434 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod 
"fbe8f77f-9f94-4f7a-bbb0-d865a937b584" (UID: "fbe8f77f-9f94-4f7a-bbb0-d865a937b584"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:11:25.502512 kubelet[2615]: I0129 11:11:25.501487 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fbe8f77f-9f94-4f7a-bbb0-d865a937b584" (UID: "fbe8f77f-9f94-4f7a-bbb0-d865a937b584"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:11:25.502512 kubelet[2615]: I0129 11:11:25.501507 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fbe8f77f-9f94-4f7a-bbb0-d865a937b584" (UID: "fbe8f77f-9f94-4f7a-bbb0-d865a937b584"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:11:25.502512 kubelet[2615]: I0129 11:11:25.501530 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fbe8f77f-9f94-4f7a-bbb0-d865a937b584" (UID: "fbe8f77f-9f94-4f7a-bbb0-d865a937b584"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:11:25.502688 kubelet[2615]: I0129 11:11:25.501488 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fbe8f77f-9f94-4f7a-bbb0-d865a937b584" (UID: "fbe8f77f-9f94-4f7a-bbb0-d865a937b584"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:11:25.502688 kubelet[2615]: I0129 11:11:25.501441 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-hostproc" (OuterVolumeSpecName: "hostproc") pod "fbe8f77f-9f94-4f7a-bbb0-d865a937b584" (UID: "fbe8f77f-9f94-4f7a-bbb0-d865a937b584"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:11:25.502688 kubelet[2615]: I0129 11:11:25.501582 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-cni-path" (OuterVolumeSpecName: "cni-path") pod "fbe8f77f-9f94-4f7a-bbb0-d865a937b584" (UID: "fbe8f77f-9f94-4f7a-bbb0-d865a937b584"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:11:25.502688 kubelet[2615]: I0129 11:11:25.501464 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-host-proc-sys-net\") pod \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " Jan 29 11:11:25.502688 kubelet[2615]: I0129 11:11:25.501656 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-clustermesh-secrets\") pod \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " Jan 29 11:11:25.502874 kubelet[2615]: I0129 11:11:25.501682 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-lib-modules\") pod \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " Jan 29 11:11:25.502874 kubelet[2615]: 
I0129 11:11:25.501706 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-cilium-config-path\") pod \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " Jan 29 11:11:25.502874 kubelet[2615]: I0129 11:11:25.501726 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-cilium-cgroup\") pod \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " Jan 29 11:11:25.502874 kubelet[2615]: I0129 11:11:25.501748 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-hubble-tls\") pod \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " Jan 29 11:11:25.502874 kubelet[2615]: I0129 11:11:25.501799 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-bpf-maps\") pod \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\" (UID: \"fbe8f77f-9f94-4f7a-bbb0-d865a937b584\") " Jan 29 11:11:25.502874 kubelet[2615]: I0129 11:11:25.501867 2615 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 29 11:11:25.502874 kubelet[2615]: I0129 11:11:25.501883 2615 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 29 11:11:25.503100 kubelet[2615]: I0129 11:11:25.501896 2615 reconciler_common.go:288] "Volume detached 
for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 29 11:11:25.503100 kubelet[2615]: I0129 11:11:25.501907 2615 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 29 11:11:25.503100 kubelet[2615]: I0129 11:11:25.501919 2615 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 29 11:11:25.503100 kubelet[2615]: I0129 11:11:25.501931 2615 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 29 11:11:25.503100 kubelet[2615]: I0129 11:11:25.501943 2615 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 29 11:11:25.503100 kubelet[2615]: I0129 11:11:25.501971 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fbe8f77f-9f94-4f7a-bbb0-d865a937b584" (UID: "fbe8f77f-9f94-4f7a-bbb0-d865a937b584"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:11:25.506094 kubelet[2615]: I0129 11:11:25.505198 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a85546d-d1a8-4f00-bee2-692cea05a194-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4a85546d-d1a8-4f00-bee2-692cea05a194" (UID: "4a85546d-d1a8-4f00-bee2-692cea05a194"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:11:25.506094 kubelet[2615]: I0129 11:11:25.505248 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fbe8f77f-9f94-4f7a-bbb0-d865a937b584" (UID: "fbe8f77f-9f94-4f7a-bbb0-d865a937b584"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:11:25.506094 kubelet[2615]: I0129 11:11:25.505287 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fbe8f77f-9f94-4f7a-bbb0-d865a937b584" (UID: "fbe8f77f-9f94-4f7a-bbb0-d865a937b584"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:11:25.506213 kubelet[2615]: I0129 11:11:25.506117 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fbe8f77f-9f94-4f7a-bbb0-d865a937b584" (UID: "fbe8f77f-9f94-4f7a-bbb0-d865a937b584"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:11:25.506410 kubelet[2615]: I0129 11:11:25.506363 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-kube-api-access-gfsmx" (OuterVolumeSpecName: "kube-api-access-gfsmx") pod "fbe8f77f-9f94-4f7a-bbb0-d865a937b584" (UID: "fbe8f77f-9f94-4f7a-bbb0-d865a937b584"). InnerVolumeSpecName "kube-api-access-gfsmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:11:25.508539 kubelet[2615]: I0129 11:11:25.508508 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fbe8f77f-9f94-4f7a-bbb0-d865a937b584" (UID: "fbe8f77f-9f94-4f7a-bbb0-d865a937b584"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:11:25.508593 kubelet[2615]: I0129 11:11:25.508509 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a85546d-d1a8-4f00-bee2-692cea05a194-kube-api-access-qpd9h" (OuterVolumeSpecName: "kube-api-access-qpd9h") pod "4a85546d-d1a8-4f00-bee2-692cea05a194" (UID: "4a85546d-d1a8-4f00-bee2-692cea05a194"). InnerVolumeSpecName "kube-api-access-qpd9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:11:25.508980 kubelet[2615]: I0129 11:11:25.508958 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fbe8f77f-9f94-4f7a-bbb0-d865a937b584" (UID: "fbe8f77f-9f94-4f7a-bbb0-d865a937b584"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:11:25.602477 kubelet[2615]: I0129 11:11:25.602347 2615 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 11:11:25.602477 kubelet[2615]: I0129 11:11:25.602385 2615 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 29 11:11:25.602477 kubelet[2615]: I0129 11:11:25.602393 2615 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 29 11:11:25.602477 kubelet[2615]: I0129 11:11:25.602403 2615 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 29 11:11:25.602477 kubelet[2615]: I0129 11:11:25.602411 2615 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 29 11:11:25.602477 kubelet[2615]: I0129 11:11:25.602419 2615 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gfsmx\" (UniqueName: \"kubernetes.io/projected/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-kube-api-access-gfsmx\") on node \"localhost\" DevicePath \"\"" Jan 29 11:11:25.602477 kubelet[2615]: I0129 11:11:25.602430 2615 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qpd9h\" (UniqueName: \"kubernetes.io/projected/4a85546d-d1a8-4f00-bee2-692cea05a194-kube-api-access-qpd9h\") on node \"localhost\" DevicePath \"\"" Jan 29 11:11:25.602477 kubelet[2615]: 
I0129 11:11:25.602438 2615 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a85546d-d1a8-4f00-bee2-692cea05a194-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 11:11:25.602867 kubelet[2615]: I0129 11:11:25.602446 2615 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fbe8f77f-9f94-4f7a-bbb0-d865a937b584-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 29 11:11:25.617780 systemd[1]: Removed slice kubepods-besteffort-pod4a85546d_d1a8_4f00_bee2_692cea05a194.slice - libcontainer container kubepods-besteffort-pod4a85546d_d1a8_4f00_bee2_692cea05a194.slice. Jan 29 11:11:25.618814 systemd[1]: Removed slice kubepods-burstable-podfbe8f77f_9f94_4f7a_bbb0_d865a937b584.slice - libcontainer container kubepods-burstable-podfbe8f77f_9f94_4f7a_bbb0_d865a937b584.slice. Jan 29 11:11:25.618918 systemd[1]: kubepods-burstable-podfbe8f77f_9f94_4f7a_bbb0_d865a937b584.slice: Consumed 7.339s CPU time. Jan 29 11:11:26.088738 kubelet[2615]: I0129 11:11:26.087374 2615 scope.go:117] "RemoveContainer" containerID="80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067" Jan 29 11:11:26.088903 containerd[1488]: time="2025-01-29T11:11:26.088550494Z" level=info msg="RemoveContainer for \"80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067\"" Jan 29 11:11:26.092428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4-rootfs.mount: Deactivated successfully. Jan 29 11:11:26.092575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7-rootfs.mount: Deactivated successfully. Jan 29 11:11:26.092673 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7-shm.mount: Deactivated successfully. 
Jan 29 11:11:26.092791 systemd[1]: var-lib-kubelet-pods-fbe8f77f\x2d9f94\x2d4f7a\x2dbbb0\x2dd865a937b584-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 11:11:26.092881 systemd[1]: var-lib-kubelet-pods-4a85546d\x2dd1a8\x2d4f00\x2dbee2\x2d692cea05a194-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqpd9h.mount: Deactivated successfully. Jan 29 11:11:26.092967 systemd[1]: var-lib-kubelet-pods-fbe8f77f\x2d9f94\x2d4f7a\x2dbbb0\x2dd865a937b584-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgfsmx.mount: Deactivated successfully. Jan 29 11:11:26.093064 systemd[1]: var-lib-kubelet-pods-fbe8f77f\x2d9f94\x2d4f7a\x2dbbb0\x2dd865a937b584-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 11:11:26.098293 containerd[1488]: time="2025-01-29T11:11:26.098228155Z" level=info msg="RemoveContainer for \"80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067\" returns successfully" Jan 29 11:11:26.098662 kubelet[2615]: I0129 11:11:26.098540 2615 scope.go:117] "RemoveContainer" containerID="1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777" Jan 29 11:11:26.100151 containerd[1488]: time="2025-01-29T11:11:26.100097792Z" level=info msg="RemoveContainer for \"1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777\"" Jan 29 11:11:26.106754 containerd[1488]: time="2025-01-29T11:11:26.106684128Z" level=info msg="RemoveContainer for \"1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777\" returns successfully" Jan 29 11:11:26.107050 kubelet[2615]: I0129 11:11:26.107014 2615 scope.go:117] "RemoveContainer" containerID="3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0" Jan 29 11:11:26.109298 containerd[1488]: time="2025-01-29T11:11:26.108973648Z" level=info msg="RemoveContainer for \"3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0\"" Jan 29 11:11:26.115004 containerd[1488]: 
time="2025-01-29T11:11:26.114954499Z" level=info msg="RemoveContainer for \"3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0\" returns successfully" Jan 29 11:11:26.115351 kubelet[2615]: I0129 11:11:26.115326 2615 scope.go:117] "RemoveContainer" containerID="51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522" Jan 29 11:11:26.116632 containerd[1488]: time="2025-01-29T11:11:26.116587758Z" level=info msg="RemoveContainer for \"51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522\"" Jan 29 11:11:26.210317 containerd[1488]: time="2025-01-29T11:11:26.210272541Z" level=info msg="RemoveContainer for \"51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522\" returns successfully" Jan 29 11:11:26.210582 kubelet[2615]: I0129 11:11:26.210549 2615 scope.go:117] "RemoveContainer" containerID="68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528" Jan 29 11:11:26.211665 containerd[1488]: time="2025-01-29T11:11:26.211616042Z" level=info msg="RemoveContainer for \"68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528\"" Jan 29 11:11:26.302272 containerd[1488]: time="2025-01-29T11:11:26.302209801Z" level=info msg="RemoveContainer for \"68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528\" returns successfully" Jan 29 11:11:26.302521 kubelet[2615]: I0129 11:11:26.302487 2615 scope.go:117] "RemoveContainer" containerID="80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067" Jan 29 11:11:26.302790 containerd[1488]: time="2025-01-29T11:11:26.302739723Z" level=error msg="ContainerStatus for \"80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067\": not found" Jan 29 11:11:26.309202 kubelet[2615]: E0129 11:11:26.309161 2615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067\": not found" containerID="80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067" Jan 29 11:11:26.309306 kubelet[2615]: I0129 11:11:26.309206 2615 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067"} err="failed to get container status \"80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067\": rpc error: code = NotFound desc = an error occurred when try to find container \"80392c633e5d14cea8246d799bd76f8bf6a2a14a57ddee82a16a2ad85c2f2067\": not found" Jan 29 11:11:26.309344 kubelet[2615]: I0129 11:11:26.309307 2615 scope.go:117] "RemoveContainer" containerID="1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777" Jan 29 11:11:26.309518 containerd[1488]: time="2025-01-29T11:11:26.309490009Z" level=error msg="ContainerStatus for \"1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777\": not found" Jan 29 11:11:26.309663 kubelet[2615]: E0129 11:11:26.309617 2615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777\": not found" containerID="1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777" Jan 29 11:11:26.309715 kubelet[2615]: I0129 11:11:26.309655 2615 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777"} err="failed to get container status \"1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"1f119913cdd8e23e6f92cd9b171124331427ba7c2dfe4ea8c5180e3064a71777\": not found" Jan 29 11:11:26.309715 kubelet[2615]: I0129 11:11:26.309678 2615 scope.go:117] "RemoveContainer" containerID="3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0" Jan 29 11:11:26.309898 containerd[1488]: time="2025-01-29T11:11:26.309836473Z" level=error msg="ContainerStatus for \"3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0\": not found" Jan 29 11:11:26.309946 kubelet[2615]: E0129 11:11:26.309928 2615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0\": not found" containerID="3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0" Jan 29 11:11:26.309988 kubelet[2615]: I0129 11:11:26.309943 2615 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0"} err="failed to get container status \"3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ce1ee82667f8b00a0b17546a0507d996c9b64a285d0f178fe22f398e8fd50c0\": not found" Jan 29 11:11:26.309988 kubelet[2615]: I0129 11:11:26.309958 2615 scope.go:117] "RemoveContainer" containerID="51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522" Jan 29 11:11:26.310105 containerd[1488]: time="2025-01-29T11:11:26.310071428Z" level=error msg="ContainerStatus for \"51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522\": not found" Jan 29 11:11:26.310226 kubelet[2615]: E0129 11:11:26.310162 2615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522\": not found" containerID="51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522" Jan 29 11:11:26.310226 kubelet[2615]: I0129 11:11:26.310185 2615 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522"} err="failed to get container status \"51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522\": rpc error: code = NotFound desc = an error occurred when try to find container \"51706c6d7e22797526cfa36d74e494f5760cb5da31df4d6befedfed55043a522\": not found" Jan 29 11:11:26.310226 kubelet[2615]: I0129 11:11:26.310204 2615 scope.go:117] "RemoveContainer" containerID="68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528" Jan 29 11:11:26.310520 containerd[1488]: time="2025-01-29T11:11:26.310472537Z" level=error msg="ContainerStatus for \"68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528\": not found" Jan 29 11:11:26.310659 kubelet[2615]: E0129 11:11:26.310638 2615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528\": not found" containerID="68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528" Jan 29 11:11:26.310688 kubelet[2615]: I0129 11:11:26.310663 2615 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528"} err="failed to get container status \"68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528\": rpc error: code = NotFound desc = an error occurred when try to find container \"68901ea3b6d301ab791b844e06fceb559db018270eb1bdf60cc66007406dc528\": not found" Jan 29 11:11:26.310688 kubelet[2615]: I0129 11:11:26.310679 2615 scope.go:117] "RemoveContainer" containerID="231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2" Jan 29 11:11:26.311516 containerd[1488]: time="2025-01-29T11:11:26.311492175Z" level=info msg="RemoveContainer for \"231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2\"" Jan 29 11:11:26.346262 containerd[1488]: time="2025-01-29T11:11:26.346162583Z" level=info msg="RemoveContainer for \"231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2\" returns successfully" Jan 29 11:11:26.346399 kubelet[2615]: I0129 11:11:26.346372 2615 scope.go:117] "RemoveContainer" containerID="231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2" Jan 29 11:11:26.346610 containerd[1488]: time="2025-01-29T11:11:26.346571285Z" level=error msg="ContainerStatus for \"231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2\": not found" Jan 29 11:11:26.346809 kubelet[2615]: E0129 11:11:26.346786 2615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2\": not found" containerID="231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2" Jan 29 11:11:26.346863 kubelet[2615]: I0129 11:11:26.346816 2615 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2"} err="failed to get container status \"231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"231f1ef5e375298ed544b70f89c3ca951a97cab18543ebd61d0191fdb6d4a8e2\": not found" Jan 29 11:11:27.005208 sshd[4316]: Connection closed by 10.0.0.1 port 59680 Jan 29 11:11:27.005629 sshd-session[4313]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:27.015541 systemd[1]: sshd@27-10.0.0.46:22-10.0.0.1:59680.service: Deactivated successfully. Jan 29 11:11:27.017316 systemd[1]: session-27.scope: Deactivated successfully. Jan 29 11:11:27.018745 systemd-logind[1471]: Session 27 logged out. Waiting for processes to exit. Jan 29 11:11:27.024038 systemd[1]: Started sshd@28-10.0.0.46:22-10.0.0.1:59690.service - OpenSSH per-connection server daemon (10.0.0.1:59690). Jan 29 11:11:27.025578 systemd-logind[1471]: Removed session 27. Jan 29 11:11:27.056378 sshd[4476]: Accepted publickey for core from 10.0.0.1 port 59690 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:11:27.057635 sshd-session[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:27.061561 systemd-logind[1471]: New session 28 of user core. Jan 29 11:11:27.071882 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 29 11:11:27.532211 sshd[4478]: Connection closed by 10.0.0.1 port 59690 Jan 29 11:11:27.533994 sshd-session[4476]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:27.544137 systemd[1]: sshd@28-10.0.0.46:22-10.0.0.1:59690.service: Deactivated successfully. Jan 29 11:11:27.546133 systemd[1]: session-28.scope: Deactivated successfully. 
Jan 29 11:11:27.547562 kubelet[2615]: E0129 11:11:27.547510 2615 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbe8f77f-9f94-4f7a-bbb0-d865a937b584" containerName="mount-cgroup" Jan 29 11:11:27.547562 kubelet[2615]: E0129 11:11:27.547557 2615 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbe8f77f-9f94-4f7a-bbb0-d865a937b584" containerName="mount-bpf-fs" Jan 29 11:11:27.547988 kubelet[2615]: E0129 11:11:27.547568 2615 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4a85546d-d1a8-4f00-bee2-692cea05a194" containerName="cilium-operator" Jan 29 11:11:27.547988 kubelet[2615]: E0129 11:11:27.547577 2615 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbe8f77f-9f94-4f7a-bbb0-d865a937b584" containerName="clean-cilium-state" Jan 29 11:11:27.547988 kubelet[2615]: E0129 11:11:27.547585 2615 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbe8f77f-9f94-4f7a-bbb0-d865a937b584" containerName="cilium-agent" Jan 29 11:11:27.547988 kubelet[2615]: E0129 11:11:27.547594 2615 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbe8f77f-9f94-4f7a-bbb0-d865a937b584" containerName="apply-sysctl-overwrites" Jan 29 11:11:27.547988 kubelet[2615]: I0129 11:11:27.547632 2615 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbe8f77f-9f94-4f7a-bbb0-d865a937b584" containerName="cilium-agent" Jan 29 11:11:27.547988 kubelet[2615]: I0129 11:11:27.547643 2615 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a85546d-d1a8-4f00-bee2-692cea05a194" containerName="cilium-operator" Jan 29 11:11:27.549282 systemd-logind[1471]: Session 28 logged out. Waiting for processes to exit. Jan 29 11:11:27.562164 systemd[1]: Started sshd@29-10.0.0.46:22-10.0.0.1:50880.service - OpenSSH per-connection server daemon (10.0.0.1:50880). Jan 29 11:11:27.566978 systemd-logind[1471]: Removed session 28. 
Jan 29 11:11:27.572537 systemd[1]: Created slice kubepods-burstable-podeb65e75b_929f_4718_a09e_7713d77a6adb.slice - libcontainer container kubepods-burstable-podeb65e75b_929f_4718_a09e_7713d77a6adb.slice. Jan 29 11:11:27.605718 sshd[4489]: Accepted publickey for core from 10.0.0.1 port 50880 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw Jan 29 11:11:27.607594 sshd-session[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:27.613220 kubelet[2615]: I0129 11:11:27.613177 2615 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a85546d-d1a8-4f00-bee2-692cea05a194" path="/var/lib/kubelet/pods/4a85546d-d1a8-4f00-bee2-692cea05a194/volumes" Jan 29 11:11:27.613422 systemd-logind[1471]: New session 29 of user core. Jan 29 11:11:27.613927 kubelet[2615]: I0129 11:11:27.613905 2615 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbe8f77f-9f94-4f7a-bbb0-d865a937b584" path="/var/lib/kubelet/pods/fbe8f77f-9f94-4f7a-bbb0-d865a937b584/volumes" Jan 29 11:11:27.624048 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 29 11:11:27.676909 sshd[4492]: Connection closed by 10.0.0.1 port 50880 Jan 29 11:11:27.677436 sshd-session[4489]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:27.691054 systemd[1]: sshd@29-10.0.0.46:22-10.0.0.1:50880.service: Deactivated successfully. Jan 29 11:11:27.693219 systemd[1]: session-29.scope: Deactivated successfully. Jan 29 11:11:27.695346 systemd-logind[1471]: Session 29 logged out. Waiting for processes to exit. Jan 29 11:11:27.707103 systemd[1]: Started sshd@30-10.0.0.46:22-10.0.0.1:50894.service - OpenSSH per-connection server daemon (10.0.0.1:50894). Jan 29 11:11:27.708287 systemd-logind[1471]: Removed session 29. 
Jan 29 11:11:27.715203 kubelet[2615]: I0129 11:11:27.715167 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb65e75b-929f-4718-a09e-7713d77a6adb-clustermesh-secrets\") pod \"cilium-d5vfj\" (UID: \"eb65e75b-929f-4718-a09e-7713d77a6adb\") " pod="kube-system/cilium-d5vfj"
Jan 29 11:11:27.715291 kubelet[2615]: I0129 11:11:27.715211 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/eb65e75b-929f-4718-a09e-7713d77a6adb-cilium-ipsec-secrets\") pod \"cilium-d5vfj\" (UID: \"eb65e75b-929f-4718-a09e-7713d77a6adb\") " pod="kube-system/cilium-d5vfj"
Jan 29 11:11:27.715291 kubelet[2615]: I0129 11:11:27.715232 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb65e75b-929f-4718-a09e-7713d77a6adb-host-proc-sys-net\") pod \"cilium-d5vfj\" (UID: \"eb65e75b-929f-4718-a09e-7713d77a6adb\") " pod="kube-system/cilium-d5vfj"
Jan 29 11:11:27.715353 kubelet[2615]: I0129 11:11:27.715274 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb65e75b-929f-4718-a09e-7713d77a6adb-lib-modules\") pod \"cilium-d5vfj\" (UID: \"eb65e75b-929f-4718-a09e-7713d77a6adb\") " pod="kube-system/cilium-d5vfj"
Jan 29 11:11:27.715382 kubelet[2615]: I0129 11:11:27.715367 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb65e75b-929f-4718-a09e-7713d77a6adb-hostproc\") pod \"cilium-d5vfj\" (UID: \"eb65e75b-929f-4718-a09e-7713d77a6adb\") " pod="kube-system/cilium-d5vfj"
Jan 29 11:11:27.715415 kubelet[2615]: I0129 11:11:27.715393 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb65e75b-929f-4718-a09e-7713d77a6adb-cni-path\") pod \"cilium-d5vfj\" (UID: \"eb65e75b-929f-4718-a09e-7713d77a6adb\") " pod="kube-system/cilium-d5vfj"
Jan 29 11:11:27.715441 kubelet[2615]: I0129 11:11:27.715417 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb65e75b-929f-4718-a09e-7713d77a6adb-bpf-maps\") pod \"cilium-d5vfj\" (UID: \"eb65e75b-929f-4718-a09e-7713d77a6adb\") " pod="kube-system/cilium-d5vfj"
Jan 29 11:11:27.715462 kubelet[2615]: I0129 11:11:27.715449 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb65e75b-929f-4718-a09e-7713d77a6adb-cilium-run\") pod \"cilium-d5vfj\" (UID: \"eb65e75b-929f-4718-a09e-7713d77a6adb\") " pod="kube-system/cilium-d5vfj"
Jan 29 11:11:27.715489 kubelet[2615]: I0129 11:11:27.715472 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb65e75b-929f-4718-a09e-7713d77a6adb-etc-cni-netd\") pod \"cilium-d5vfj\" (UID: \"eb65e75b-929f-4718-a09e-7713d77a6adb\") " pod="kube-system/cilium-d5vfj"
Jan 29 11:11:27.715522 kubelet[2615]: I0129 11:11:27.715493 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb65e75b-929f-4718-a09e-7713d77a6adb-xtables-lock\") pod \"cilium-d5vfj\" (UID: \"eb65e75b-929f-4718-a09e-7713d77a6adb\") " pod="kube-system/cilium-d5vfj"
Jan 29 11:11:27.715552 kubelet[2615]: I0129 11:11:27.715520 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb65e75b-929f-4718-a09e-7713d77a6adb-cilium-cgroup\") pod \"cilium-d5vfj\" (UID: \"eb65e75b-929f-4718-a09e-7713d77a6adb\") " pod="kube-system/cilium-d5vfj"
Jan 29 11:11:27.715552 kubelet[2615]: I0129 11:11:27.715542 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb65e75b-929f-4718-a09e-7713d77a6adb-cilium-config-path\") pod \"cilium-d5vfj\" (UID: \"eb65e75b-929f-4718-a09e-7713d77a6adb\") " pod="kube-system/cilium-d5vfj"
Jan 29 11:11:27.715592 kubelet[2615]: I0129 11:11:27.715575 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb65e75b-929f-4718-a09e-7713d77a6adb-host-proc-sys-kernel\") pod \"cilium-d5vfj\" (UID: \"eb65e75b-929f-4718-a09e-7713d77a6adb\") " pod="kube-system/cilium-d5vfj"
Jan 29 11:11:27.715626 kubelet[2615]: I0129 11:11:27.715597 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb65e75b-929f-4718-a09e-7713d77a6adb-hubble-tls\") pod \"cilium-d5vfj\" (UID: \"eb65e75b-929f-4718-a09e-7713d77a6adb\") " pod="kube-system/cilium-d5vfj"
Jan 29 11:11:27.715650 kubelet[2615]: I0129 11:11:27.715632 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96wgj\" (UniqueName: \"kubernetes.io/projected/eb65e75b-929f-4718-a09e-7713d77a6adb-kube-api-access-96wgj\") pod \"cilium-d5vfj\" (UID: \"eb65e75b-929f-4718-a09e-7713d77a6adb\") " pod="kube-system/cilium-d5vfj"
Jan 29 11:11:27.741782 sshd[4498]: Accepted publickey for core from 10.0.0.1 port 50894 ssh2: RSA SHA256:sXrKRGdMLS3cpjef8tChEC4L3/3e8SqSuyhEXPLIxmw
Jan 29 11:11:27.743581 sshd-session[4498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:27.748722 systemd-logind[1471]: New session 30 of user core.
Jan 29 11:11:27.758969 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 29 11:11:27.879557 kubelet[2615]: E0129 11:11:27.879404 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:11:27.880857 containerd[1488]: time="2025-01-29T11:11:27.880280567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d5vfj,Uid:eb65e75b-929f-4718-a09e-7713d77a6adb,Namespace:kube-system,Attempt:0,}"
Jan 29 11:11:27.908938 containerd[1488]: time="2025-01-29T11:11:27.908849888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:11:27.908938 containerd[1488]: time="2025-01-29T11:11:27.908899903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:11:27.908938 containerd[1488]: time="2025-01-29T11:11:27.908915262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:11:27.910014 containerd[1488]: time="2025-01-29T11:11:27.909931494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:11:27.928915 systemd[1]: Started cri-containerd-f96f206284844ce5941c06238ba51d4bfcbc3c5832bce53652ba5db5c869933a.scope - libcontainer container f96f206284844ce5941c06238ba51d4bfcbc3c5832bce53652ba5db5c869933a.
Jan 29 11:11:27.952277 containerd[1488]: time="2025-01-29T11:11:27.952237380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d5vfj,Uid:eb65e75b-929f-4718-a09e-7713d77a6adb,Namespace:kube-system,Attempt:0,} returns sandbox id \"f96f206284844ce5941c06238ba51d4bfcbc3c5832bce53652ba5db5c869933a\""
Jan 29 11:11:27.953396 kubelet[2615]: E0129 11:11:27.953026 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:11:27.955050 containerd[1488]: time="2025-01-29T11:11:27.955020652Z" level=info msg="CreateContainer within sandbox \"f96f206284844ce5941c06238ba51d4bfcbc3c5832bce53652ba5db5c869933a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 11:11:27.971511 containerd[1488]: time="2025-01-29T11:11:27.970946887Z" level=info msg="CreateContainer within sandbox \"f96f206284844ce5941c06238ba51d4bfcbc3c5832bce53652ba5db5c869933a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dacb03e988d98dce17045789b0d02948b369b8dced3bc20bc47645f83fc20a5e\""
Jan 29 11:11:27.971511 containerd[1488]: time="2025-01-29T11:11:27.971462823Z" level=info msg="StartContainer for \"dacb03e988d98dce17045789b0d02948b369b8dced3bc20bc47645f83fc20a5e\""
Jan 29 11:11:27.999924 systemd[1]: Started cri-containerd-dacb03e988d98dce17045789b0d02948b369b8dced3bc20bc47645f83fc20a5e.scope - libcontainer container dacb03e988d98dce17045789b0d02948b369b8dced3bc20bc47645f83fc20a5e.
Jan 29 11:11:28.026571 containerd[1488]: time="2025-01-29T11:11:28.026520334Z" level=info msg="StartContainer for \"dacb03e988d98dce17045789b0d02948b369b8dced3bc20bc47645f83fc20a5e\" returns successfully"
Jan 29 11:11:28.037343 systemd[1]: cri-containerd-dacb03e988d98dce17045789b0d02948b369b8dced3bc20bc47645f83fc20a5e.scope: Deactivated successfully.
Jan 29 11:11:28.070251 containerd[1488]: time="2025-01-29T11:11:28.070166622Z" level=info msg="shim disconnected" id=dacb03e988d98dce17045789b0d02948b369b8dced3bc20bc47645f83fc20a5e namespace=k8s.io
Jan 29 11:11:28.070251 containerd[1488]: time="2025-01-29T11:11:28.070225213Z" level=warning msg="cleaning up after shim disconnected" id=dacb03e988d98dce17045789b0d02948b369b8dced3bc20bc47645f83fc20a5e namespace=k8s.io
Jan 29 11:11:28.070251 containerd[1488]: time="2025-01-29T11:11:28.070238468Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:11:28.095438 kubelet[2615]: E0129 11:11:28.095402 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:11:28.097269 containerd[1488]: time="2025-01-29T11:11:28.097219529Z" level=info msg="CreateContainer within sandbox \"f96f206284844ce5941c06238ba51d4bfcbc3c5832bce53652ba5db5c869933a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 11:11:28.115396 containerd[1488]: time="2025-01-29T11:11:28.115334788Z" level=info msg="CreateContainer within sandbox \"f96f206284844ce5941c06238ba51d4bfcbc3c5832bce53652ba5db5c869933a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7c29e1856d86cf999b70511cc595260e4b264f0fbc5af7b21dee154f3fae2edd\""
Jan 29 11:11:28.115883 containerd[1488]: time="2025-01-29T11:11:28.115851966Z" level=info msg="StartContainer for \"7c29e1856d86cf999b70511cc595260e4b264f0fbc5af7b21dee154f3fae2edd\""
Jan 29 11:11:28.150964 systemd[1]: Started cri-containerd-7c29e1856d86cf999b70511cc595260e4b264f0fbc5af7b21dee154f3fae2edd.scope - libcontainer container 7c29e1856d86cf999b70511cc595260e4b264f0fbc5af7b21dee154f3fae2edd.
Jan 29 11:11:28.178240 containerd[1488]: time="2025-01-29T11:11:28.178176488Z" level=info msg="StartContainer for \"7c29e1856d86cf999b70511cc595260e4b264f0fbc5af7b21dee154f3fae2edd\" returns successfully"
Jan 29 11:11:28.184363 systemd[1]: cri-containerd-7c29e1856d86cf999b70511cc595260e4b264f0fbc5af7b21dee154f3fae2edd.scope: Deactivated successfully.
Jan 29 11:11:28.209631 containerd[1488]: time="2025-01-29T11:11:28.209551066Z" level=info msg="shim disconnected" id=7c29e1856d86cf999b70511cc595260e4b264f0fbc5af7b21dee154f3fae2edd namespace=k8s.io
Jan 29 11:11:28.209631 containerd[1488]: time="2025-01-29T11:11:28.209626298Z" level=warning msg="cleaning up after shim disconnected" id=7c29e1856d86cf999b70511cc595260e4b264f0fbc5af7b21dee154f3fae2edd namespace=k8s.io
Jan 29 11:11:28.209631 containerd[1488]: time="2025-01-29T11:11:28.209635355Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:11:28.670914 kubelet[2615]: E0129 11:11:28.670868 2615 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 11:11:29.098267 kubelet[2615]: E0129 11:11:29.098200 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:11:29.100448 containerd[1488]: time="2025-01-29T11:11:29.099972017Z" level=info msg="CreateContainer within sandbox \"f96f206284844ce5941c06238ba51d4bfcbc3c5832bce53652ba5db5c869933a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 11:11:29.119742 containerd[1488]: time="2025-01-29T11:11:29.119699709Z" level=info msg="CreateContainer within sandbox \"f96f206284844ce5941c06238ba51d4bfcbc3c5832bce53652ba5db5c869933a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9c250378421361c185acdde15fadef5d09fb80bbe27d72839da8e6262fd35e5b\""
Jan 29 11:11:29.120286 containerd[1488]: time="2025-01-29T11:11:29.120263295Z" level=info msg="StartContainer for \"9c250378421361c185acdde15fadef5d09fb80bbe27d72839da8e6262fd35e5b\""
Jan 29 11:11:29.155997 systemd[1]: Started cri-containerd-9c250378421361c185acdde15fadef5d09fb80bbe27d72839da8e6262fd35e5b.scope - libcontainer container 9c250378421361c185acdde15fadef5d09fb80bbe27d72839da8e6262fd35e5b.
Jan 29 11:11:29.189223 systemd[1]: cri-containerd-9c250378421361c185acdde15fadef5d09fb80bbe27d72839da8e6262fd35e5b.scope: Deactivated successfully.
Jan 29 11:11:29.192143 containerd[1488]: time="2025-01-29T11:11:29.192113757Z" level=info msg="StartContainer for \"9c250378421361c185acdde15fadef5d09fb80bbe27d72839da8e6262fd35e5b\" returns successfully"
Jan 29 11:11:29.216343 containerd[1488]: time="2025-01-29T11:11:29.216263258Z" level=info msg="shim disconnected" id=9c250378421361c185acdde15fadef5d09fb80bbe27d72839da8e6262fd35e5b namespace=k8s.io
Jan 29 11:11:29.216343 containerd[1488]: time="2025-01-29T11:11:29.216335886Z" level=warning msg="cleaning up after shim disconnected" id=9c250378421361c185acdde15fadef5d09fb80bbe27d72839da8e6262fd35e5b namespace=k8s.io
Jan 29 11:11:29.216343 containerd[1488]: time="2025-01-29T11:11:29.216348039Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:11:29.820812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c250378421361c185acdde15fadef5d09fb80bbe27d72839da8e6262fd35e5b-rootfs.mount: Deactivated successfully.
Jan 29 11:11:30.101467 kubelet[2615]: E0129 11:11:30.101350 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:11:30.103601 containerd[1488]: time="2025-01-29T11:11:30.103552276Z" level=info msg="CreateContainer within sandbox \"f96f206284844ce5941c06238ba51d4bfcbc3c5832bce53652ba5db5c869933a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 11:11:30.125009 containerd[1488]: time="2025-01-29T11:11:30.124964397Z" level=info msg="CreateContainer within sandbox \"f96f206284844ce5941c06238ba51d4bfcbc3c5832bce53652ba5db5c869933a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cb2f280c1ed6cfcbcf4b30a5368fac7d1c9c031b188453ca871b41d28c5b31db\""
Jan 29 11:11:30.125546 containerd[1488]: time="2025-01-29T11:11:30.125511180Z" level=info msg="StartContainer for \"cb2f280c1ed6cfcbcf4b30a5368fac7d1c9c031b188453ca871b41d28c5b31db\""
Jan 29 11:11:30.165004 systemd[1]: Started cri-containerd-cb2f280c1ed6cfcbcf4b30a5368fac7d1c9c031b188453ca871b41d28c5b31db.scope - libcontainer container cb2f280c1ed6cfcbcf4b30a5368fac7d1c9c031b188453ca871b41d28c5b31db.
Jan 29 11:11:30.190484 systemd[1]: cri-containerd-cb2f280c1ed6cfcbcf4b30a5368fac7d1c9c031b188453ca871b41d28c5b31db.scope: Deactivated successfully.
Jan 29 11:11:30.193107 containerd[1488]: time="2025-01-29T11:11:30.193073600Z" level=info msg="StartContainer for \"cb2f280c1ed6cfcbcf4b30a5368fac7d1c9c031b188453ca871b41d28c5b31db\" returns successfully"
Jan 29 11:11:30.218629 containerd[1488]: time="2025-01-29T11:11:30.218536589Z" level=info msg="shim disconnected" id=cb2f280c1ed6cfcbcf4b30a5368fac7d1c9c031b188453ca871b41d28c5b31db namespace=k8s.io
Jan 29 11:11:30.218629 containerd[1488]: time="2025-01-29T11:11:30.218613644Z" level=warning msg="cleaning up after shim disconnected" id=cb2f280c1ed6cfcbcf4b30a5368fac7d1c9c031b188453ca871b41d28c5b31db namespace=k8s.io
Jan 29 11:11:30.218629 containerd[1488]: time="2025-01-29T11:11:30.218623162Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:11:30.820873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb2f280c1ed6cfcbcf4b30a5368fac7d1c9c031b188453ca871b41d28c5b31db-rootfs.mount: Deactivated successfully.
Jan 29 11:11:31.105784 kubelet[2615]: E0129 11:11:31.105654 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:11:31.107057 containerd[1488]: time="2025-01-29T11:11:31.107022112Z" level=info msg="CreateContainer within sandbox \"f96f206284844ce5941c06238ba51d4bfcbc3c5832bce53652ba5db5c869933a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 11:11:31.649378 containerd[1488]: time="2025-01-29T11:11:31.649322481Z" level=info msg="CreateContainer within sandbox \"f96f206284844ce5941c06238ba51d4bfcbc3c5832bce53652ba5db5c869933a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"23cd62cb44b40b4cadddccba428817b7f38fad2c5e141eb418ffa3152949847d\""
Jan 29 11:11:31.650084 containerd[1488]: time="2025-01-29T11:11:31.649929647Z" level=info msg="StartContainer for \"23cd62cb44b40b4cadddccba428817b7f38fad2c5e141eb418ffa3152949847d\""
Jan 29 11:11:31.679890 systemd[1]: Started cri-containerd-23cd62cb44b40b4cadddccba428817b7f38fad2c5e141eb418ffa3152949847d.scope - libcontainer container 23cd62cb44b40b4cadddccba428817b7f38fad2c5e141eb418ffa3152949847d.
Jan 29 11:11:31.804545 containerd[1488]: time="2025-01-29T11:11:31.804491597Z" level=info msg="StartContainer for \"23cd62cb44b40b4cadddccba428817b7f38fad2c5e141eb418ffa3152949847d\" returns successfully"
Jan 29 11:11:32.109151 kubelet[2615]: E0129 11:11:32.109124 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:11:32.243791 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 29 11:11:33.880690 kubelet[2615]: E0129 11:11:33.880655 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:11:35.759217 systemd-networkd[1420]: lxc_health: Link UP
Jan 29 11:11:35.768916 systemd-networkd[1420]: lxc_health: Gained carrier
Jan 29 11:11:35.882031 kubelet[2615]: E0129 11:11:35.880918 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:11:35.904791 kubelet[2615]: I0129 11:11:35.903194 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d5vfj" podStartSLOduration=8.903175541 podStartE2EDuration="8.903175541s" podCreationTimestamp="2025-01-29 11:11:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:11:32.361789012 +0000 UTC m=+108.837060680" watchObservedRunningTime="2025-01-29 11:11:35.903175541 +0000 UTC m=+112.378447189"
Jan 29 11:11:36.115895 kubelet[2615]: E0129 11:11:36.115863 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:11:37.117847 kubelet[2615]: E0129 11:11:37.117806 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:11:37.270511 systemd-networkd[1420]: lxc_health: Gained IPv6LL
Jan 29 11:11:42.610215 kubelet[2615]: E0129 11:11:42.610157 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:11:42.991456 sshd[4500]: Connection closed by 10.0.0.1 port 50894
Jan 29 11:11:42.991876 sshd-session[4498]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:42.995726 systemd[1]: sshd@30-10.0.0.46:22-10.0.0.1:50894.service: Deactivated successfully.
Jan 29 11:11:42.997558 systemd[1]: session-30.scope: Deactivated successfully.
Jan 29 11:11:42.998152 systemd-logind[1471]: Session 30 logged out. Waiting for processes to exit.
Jan 29 11:11:42.999044 systemd-logind[1471]: Removed session 30.
Jan 29 11:11:43.602881 containerd[1488]: time="2025-01-29T11:11:43.602822563Z" level=info msg="StopPodSandbox for \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\""
Jan 29 11:11:43.603247 containerd[1488]: time="2025-01-29T11:11:43.602931548Z" level=info msg="TearDown network for sandbox \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\" successfully"
Jan 29 11:11:43.603247 containerd[1488]: time="2025-01-29T11:11:43.602944693Z" level=info msg="StopPodSandbox for \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\" returns successfully"
Jan 29 11:11:43.603391 containerd[1488]: time="2025-01-29T11:11:43.603271810Z" level=info msg="RemovePodSandbox for \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\""
Jan 29 11:11:43.603391 containerd[1488]: time="2025-01-29T11:11:43.603293522Z" level=info msg="Forcibly stopping sandbox \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\""
Jan 29 11:11:43.603391 containerd[1488]: time="2025-01-29T11:11:43.603336122Z" level=info msg="TearDown network for sandbox \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\" successfully"
Jan 29 11:11:43.685239 containerd[1488]: time="2025-01-29T11:11:43.685174566Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:11:43.685239 containerd[1488]: time="2025-01-29T11:11:43.685237746Z" level=info msg="RemovePodSandbox \"5408030f5c21244876249501efc69b5c153f8f6cd85a58d3435d99fa104188d7\" returns successfully"
Jan 29 11:11:43.685684 containerd[1488]: time="2025-01-29T11:11:43.685632712Z" level=info msg="StopPodSandbox for \"821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4\""
Jan 29 11:11:43.685851 containerd[1488]: time="2025-01-29T11:11:43.685749942Z" level=info msg="TearDown network for sandbox \"821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4\" successfully"
Jan 29 11:11:43.685851 containerd[1488]: time="2025-01-29T11:11:43.685782333Z" level=info msg="StopPodSandbox for \"821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4\" returns successfully"
Jan 29 11:11:43.686074 containerd[1488]: time="2025-01-29T11:11:43.686047525Z" level=info msg="RemovePodSandbox for \"821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4\""
Jan 29 11:11:43.686123 containerd[1488]: time="2025-01-29T11:11:43.686075056Z" level=info msg="Forcibly stopping sandbox \"821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4\""
Jan 29 11:11:43.686185 containerd[1488]: time="2025-01-29T11:11:43.686132735Z" level=info msg="TearDown network for sandbox \"821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4\" successfully"
Jan 29 11:11:43.799573 containerd[1488]: time="2025-01-29T11:11:43.799489598Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:11:43.799752 containerd[1488]: time="2025-01-29T11:11:43.799597562Z" level=info msg="RemovePodSandbox \"821ccc1405ea127246b0ec4cc47575d73ec92490dbc0126c8c6c78df94a8bdf4\" returns successfully"