Jan 13 21:18:21.942501 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025 Jan 13 21:18:21.942534 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:18:21.942552 kernel: BIOS-provided physical RAM map: Jan 13 21:18:21.942561 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 13 21:18:21.942570 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 13 21:18:21.942579 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 13 21:18:21.942590 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 13 21:18:21.942599 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 13 21:18:21.942607 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 13 21:18:21.942617 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 13 21:18:21.942641 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 13 21:18:21.942651 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 13 21:18:21.942665 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 13 21:18:21.942676 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 13 21:18:21.942692 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 13 21:18:21.942702 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 13 21:18:21.942717 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 13 21:18:21.942727 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 13 21:18:21.942738 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 13 21:18:21.942748 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 13 21:18:21.942757 kernel: NX (Execute Disable) protection: active Jan 13 21:18:21.942767 kernel: APIC: Static calls initialized Jan 13 21:18:21.942777 kernel: efi: EFI v2.7 by EDK II Jan 13 21:18:21.942787 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Jan 13 21:18:21.942796 kernel: SMBIOS 2.8 present. 
Jan 13 21:18:21.942805 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 13 21:18:21.942813 kernel: Hypervisor detected: KVM Jan 13 21:18:21.942826 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 13 21:18:21.942835 kernel: kvm-clock: using sched offset of 5355921859 cycles Jan 13 21:18:21.942845 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 13 21:18:21.942854 kernel: tsc: Detected 2794.748 MHz processor Jan 13 21:18:21.942919 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 13 21:18:21.942931 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 13 21:18:21.942941 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 13 21:18:21.942953 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 13 21:18:21.942963 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 13 21:18:21.942980 kernel: Using GB pages for direct mapping Jan 13 21:18:21.942990 kernel: Secure boot disabled Jan 13 21:18:21.943001 kernel: ACPI: Early table checksum verification disabled Jan 13 21:18:21.943011 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 13 21:18:21.943028 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 13 21:18:21.943039 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:18:21.943050 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:18:21.943066 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 13 21:18:21.943077 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:18:21.943093 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:18:21.943105 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:18:21.943116 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:18:21.943127 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 13 21:18:21.943139 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 13 21:18:21.943154 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Jan 13 21:18:21.943166 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 13 21:18:21.943177 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 13 21:18:21.943189 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 13 21:18:21.943200 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 13 21:18:21.943210 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 13 21:18:21.943221 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 13 21:18:21.943231 kernel: No NUMA configuration found Jan 13 21:18:21.943247 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 13 21:18:21.943262 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 13 21:18:21.943274 kernel: Zone ranges: Jan 13 21:18:21.943285 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 21:18:21.943296 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 13 21:18:21.943307 kernel: Normal empty Jan 13 21:18:21.943319 kernel: Movable zone start for each node Jan 13 21:18:21.943329 kernel: Early memory node ranges Jan 13 21:18:21.943341 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 13 21:18:21.943352 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 13 21:18:21.943368 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 13 21:18:21.943379 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 13 21:18:21.943391 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 13 21:18:21.943402 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 13 21:18:21.943417 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 13 21:18:21.943428 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 21:18:21.943439 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 13 21:18:21.943449 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 13 21:18:21.943459 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 21:18:21.943469 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 13 21:18:21.943485 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 13 21:18:21.943496 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 13 21:18:21.943506 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 13 21:18:21.943516 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 13 21:18:21.943526 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 13 21:18:21.943536 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 13 21:18:21.943547 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 13 21:18:21.943558 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 21:18:21.943568 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 13 21:18:21.943584 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 13 21:18:21.943596 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 21:18:21.943607 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 13 21:18:21.943618 kernel: TSC deadline timer available Jan 13 21:18:21.943639 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 13 21:18:21.943650 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 13 21:18:21.943662 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 13 21:18:21.943673 kernel: kvm-guest: setup PV sched yield Jan 13 21:18:21.943684 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 13 21:18:21.943700 kernel: Booting paravirtualized kernel on KVM Jan 13 21:18:21.943712 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 21:18:21.943723 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 13 21:18:21.943734 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 13 21:18:21.943746 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 13 21:18:21.943757 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 13 21:18:21.943767 kernel: kvm-guest: PV spinlocks enabled Jan 13 21:18:21.943778 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 13 21:18:21.943791 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 
21:18:21.943814 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 21:18:21.943825 kernel: random: crng init done Jan 13 21:18:21.943835 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 21:18:21.943846 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 21:18:21.943856 kernel: Fallback order for Node 0: 0 Jan 13 21:18:21.943867 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 13 21:18:21.943932 kernel: Policy zone: DMA32 Jan 13 21:18:21.943947 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 21:18:21.943965 kernel: Memory: 2395612K/2567000K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 171128K reserved, 0K cma-reserved) Jan 13 21:18:21.943977 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 13 21:18:21.943988 kernel: ftrace: allocating 37918 entries in 149 pages Jan 13 21:18:21.943998 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 21:18:21.944010 kernel: Dynamic Preempt: voluntary Jan 13 21:18:21.944033 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 21:18:21.944049 kernel: rcu: RCU event tracing is enabled. Jan 13 21:18:21.944061 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 13 21:18:21.944073 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 21:18:21.944085 kernel: Rude variant of Tasks RCU enabled. Jan 13 21:18:21.944097 kernel: Tracing variant of Tasks RCU enabled. Jan 13 21:18:21.944108 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 21:18:21.944125 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 13 21:18:21.944137 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 13 21:18:21.944155 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 21:18:21.944166 kernel: Console: colour dummy device 80x25 Jan 13 21:18:21.944182 kernel: printk: console [ttyS0] enabled Jan 13 21:18:21.944193 kernel: ACPI: Core revision 20230628 Jan 13 21:18:21.944204 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 13 21:18:21.944214 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 21:18:21.944225 kernel: x2apic enabled Jan 13 21:18:21.944235 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 21:18:21.944246 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 13 21:18:21.944257 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 13 21:18:21.944267 kernel: kvm-guest: setup PV IPIs Jan 13 21:18:21.944278 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 13 21:18:21.944293 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 13 21:18:21.944304 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jan 13 21:18:21.944314 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 13 21:18:21.944324 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 13 21:18:21.944335 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 13 21:18:21.944346 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 21:18:21.944356 kernel: Spectre V2 : Mitigation: Retpolines Jan 13 21:18:21.944367 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 21:18:21.944382 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 13 21:18:21.944393 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 13 21:18:21.944404 kernel: RETBleed: Mitigation: untrained return thunk Jan 13 21:18:21.944414 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 13 21:18:21.944426 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 13 21:18:21.944441 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 13 21:18:21.944452 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 13 21:18:21.944464 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 13 21:18:21.944475 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 13 21:18:21.944491 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 21:18:21.944502 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 21:18:21.944512 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 21:18:21.944523 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 13 21:18:21.944533 kernel: Freeing SMP alternatives memory: 32K Jan 13 21:18:21.944544 kernel: pid_max: default: 32768 minimum: 301 Jan 13 21:18:21.944555 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 21:18:21.944566 kernel: landlock: Up and running. Jan 13 21:18:21.944577 kernel: SELinux: Initializing. Jan 13 21:18:21.944594 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 21:18:21.944604 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 21:18:21.944615 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 13 21:18:21.944635 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 21:18:21.944646 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 21:18:21.944657 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 21:18:21.944667 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 13 21:18:21.944677 kernel: ... version: 0 Jan 13 21:18:21.944694 kernel: ... bit width: 48 Jan 13 21:18:21.944705 kernel: ... generic registers: 6 Jan 13 21:18:21.944717 kernel: ... value mask: 0000ffffffffffff Jan 13 21:18:21.944729 kernel: ... max period: 00007fffffffffff Jan 13 21:18:21.944739 kernel: ... fixed-purpose events: 0 Jan 13 21:18:21.944750 kernel: ... 
event mask: 000000000000003f Jan 13 21:18:21.944760 kernel: signal: max sigframe size: 1776 Jan 13 21:18:21.944771 kernel: rcu: Hierarchical SRCU implementation. Jan 13 21:18:21.944783 kernel: rcu: Max phase no-delay instances is 400. Jan 13 21:18:21.944794 kernel: smp: Bringing up secondary CPUs ... Jan 13 21:18:21.944810 kernel: smpboot: x86: Booting SMP configuration: Jan 13 21:18:21.944820 kernel: .... node #0, CPUs: #1 #2 #3 Jan 13 21:18:21.944831 kernel: smp: Brought up 1 node, 4 CPUs Jan 13 21:18:21.944841 kernel: smpboot: Max logical packages: 1 Jan 13 21:18:21.944852 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jan 13 21:18:21.944862 kernel: devtmpfs: initialized Jan 13 21:18:21.944872 kernel: x86/mm: Memory block size: 128MB Jan 13 21:18:21.944899 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 13 21:18:21.944909 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 13 21:18:21.944925 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 13 21:18:21.944935 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 13 21:18:21.944945 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 13 21:18:21.944956 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 21:18:21.944967 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 13 21:18:21.944977 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 21:18:21.944987 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 21:18:21.944997 kernel: audit: initializing netlink subsys (disabled) Jan 13 21:18:21.945007 kernel: audit: type=2000 audit(1736803101.178:1): state=initialized audit_enabled=0 res=1 Jan 13 21:18:21.945022 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 21:18:21.945033 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 21:18:21.945044 kernel: cpuidle: using governor menu Jan 13 21:18:21.945055 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 21:18:21.945066 kernel: dca service started, version 1.12.1 Jan 13 21:18:21.945077 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 13 21:18:21.945088 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 13 21:18:21.945098 kernel: PCI: Using configuration type 1 for base access Jan 13 21:18:21.945114 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 13 21:18:21.945126 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 21:18:21.945137 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 21:18:21.945148 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 21:18:21.945158 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 21:18:21.945170 kernel: ACPI: Added _OSI(Module Device) Jan 13 21:18:21.945181 kernel: ACPI: Added _OSI(Processor Device) Jan 13 21:18:21.945193 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 21:18:21.945205 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 21:18:21.945222 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 21:18:21.945234 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 21:18:21.945245 kernel: ACPI: Interpreter enabled Jan 13 21:18:21.945255 kernel: ACPI: PM: (supports S0 S3 S5) Jan 13 21:18:21.945267 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 21:18:21.945278 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 21:18:21.945289 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 21:18:21.945301 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 13 21:18:21.945312 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 21:18:21.945658 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 21:18:21.945873 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 13 21:18:21.946098 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 13 21:18:21.946119 kernel: PCI host bridge to bus 0000:00 Jan 13 21:18:21.946321 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 21:18:21.946503 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 21:18:21.946696 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 21:18:21.946867 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 13 21:18:21.947054 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 13 21:18:21.947206 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 13 21:18:21.947364 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 21:18:21.947573 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 13 21:18:21.947779 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 13 21:18:21.947996 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 13 21:18:21.948178 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 13 21:18:21.948429 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 13 21:18:21.948718 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 13 21:18:21.948905 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 21:18:21.949090 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 21:18:21.949266 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 13 21:18:21.949447 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 13 21:18:21.949638 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 13 21:18:21.950003 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 13 21:18:21.950274 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 13 
21:18:21.950475 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jan 13 21:18:21.950681 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 13 21:18:21.950917 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 13 21:18:21.951116 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 13 21:18:21.951307 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 13 21:18:21.951533 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 13 21:18:21.951776 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 13 21:18:21.952019 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 13 21:18:21.952213 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 13 21:18:21.952425 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 13 21:18:21.952694 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 13 21:18:21.952963 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 13 21:18:21.953187 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 13 21:18:21.953371 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 13 21:18:21.953391 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 13 21:18:21.953403 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 13 21:18:21.953415 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 21:18:21.953434 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 13 21:18:21.953446 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 13 21:18:21.953457 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 13 21:18:21.953469 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 13 21:18:21.953481 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 13 21:18:21.953493 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 13 21:18:21.953505 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 13 21:18:21.953517 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 13 21:18:21.953528 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 13 21:18:21.953546 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 13 21:18:21.953558 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 13 21:18:21.953569 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 13 21:18:21.953581 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 13 21:18:21.953593 kernel: iommu: Default domain type: Translated Jan 13 21:18:21.953606 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 21:18:21.953617 kernel: efivars: Registered efivars operations Jan 13 21:18:21.953643 kernel: PCI: Using ACPI for IRQ routing Jan 13 21:18:21.953656 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 21:18:21.953674 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 13 21:18:21.953686 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 13 21:18:21.953699 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 13 21:18:21.953711 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 13 21:18:21.953921 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 13 21:18:21.954115 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 13 21:18:21.954295 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 
21:18:21.954314 kernel: vgaarb: loaded Jan 13 21:18:21.954326 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 13 21:18:21.954345 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 13 21:18:21.954358 kernel: clocksource: Switched to clocksource kvm-clock Jan 13 21:18:21.954370 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 21:18:21.954382 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 21:18:21.954394 kernel: pnp: PnP ACPI init Jan 13 21:18:21.954617 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 13 21:18:21.954650 kernel: pnp: PnP ACPI: found 6 devices Jan 13 21:18:21.954663 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 21:18:21.954682 kernel: NET: Registered PF_INET protocol family Jan 13 21:18:21.954694 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 21:18:21.954706 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 21:18:21.954717 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 21:18:21.954729 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 21:18:21.954740 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 21:18:21.954750 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 21:18:21.954761 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 21:18:21.954771 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 21:18:21.954787 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 21:18:21.954798 kernel: NET: Registered PF_XDP protocol family Jan 13 21:18:21.954994 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 13 21:18:21.955172 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 13 21:18:21.955335 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 21:18:21.955495 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 21:18:21.955669 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 21:18:21.955840 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 13 21:18:21.956061 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 13 21:18:21.956220 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 13 21:18:21.956237 kernel: PCI: CLS 0 bytes, default 64 Jan 13 21:18:21.956248 kernel: Initialise system trusted keyrings Jan 13 21:18:21.956259 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 21:18:21.956270 kernel: Key type asymmetric registered Jan 13 21:18:21.956281 kernel: Asymmetric key parser 'x509' registered Jan 13 21:18:21.956292 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 21:18:21.956311 kernel: io scheduler mq-deadline registered Jan 13 21:18:21.956323 kernel: io scheduler kyber registered Jan 13 21:18:21.956335 kernel: io scheduler bfq registered Jan 13 21:18:21.956346 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 21:18:21.956358 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 13 21:18:21.956369 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 13 21:18:21.956381 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 13 21:18:21.956393 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled 
Jan 13 21:18:21.956405 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 21:18:21.956423 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 21:18:21.956435 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 21:18:21.956448 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 21:18:21.956713 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 13 21:18:21.956735 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 21:18:21.956929 kernel: rtc_cmos 00:04: registered as rtc0 Jan 13 21:18:21.957108 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:18:21 UTC (1736803101) Jan 13 21:18:21.957287 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 13 21:18:21.957341 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 13 21:18:21.957354 kernel: efifb: probing for efifb Jan 13 21:18:21.957367 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 13 21:18:21.957378 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 13 21:18:21.957391 kernel: efifb: scrolling: redraw Jan 13 21:18:21.957403 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 13 21:18:21.957415 kernel: Console: switching to colour frame buffer device 100x37 Jan 13 21:18:21.957453 kernel: fb0: EFI VGA frame buffer device Jan 13 21:18:21.957471 kernel: pstore: Using crash dump compression: deflate Jan 13 21:18:21.957488 kernel: pstore: Registered efi_pstore as persistent store backend Jan 13 21:18:21.957501 kernel: NET: Registered PF_INET6 protocol family Jan 13 21:18:21.957514 kernel: Segment Routing with IPv6 Jan 13 21:18:21.957526 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 21:18:21.957538 kernel: NET: Registered PF_PACKET protocol family Jan 13 21:18:21.957551 kernel: Key type dns_resolver registered Jan 13 21:18:21.957564 kernel: IPI shorthand broadcast: enabled Jan 13 21:18:21.957575 kernel: sched_clock: Marking stable (1316005412, 157148697)->(1549441626, -76287517) Jan 13 21:18:21.957587 kernel: registered taskstats version 1 Jan 13 21:18:21.957605 kernel: Loading compiled-in X.509 certificates Jan 13 21:18:21.957618 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 13 21:18:21.957642 kernel: Key type .fscrypt registered Jan 13 21:18:21.957655 kernel: Key type fscrypt-provisioning registered Jan 13 21:18:21.957667 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 13 21:18:21.957680 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:18:21.957692 kernel: ima: No architecture policies found Jan 13 21:18:21.957704 kernel: clk: Disabling unused clocks Jan 13 21:18:21.957716 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 13 21:18:21.957735 kernel: Write protecting the kernel read-only data: 36864k Jan 13 21:18:21.957747 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 13 21:18:21.957760 kernel: Run /init as init process Jan 13 21:18:21.957772 kernel: with arguments: Jan 13 21:18:21.957785 kernel: /init Jan 13 21:18:21.957797 kernel: with environment: Jan 13 21:18:21.957814 kernel: HOME=/ Jan 13 21:18:21.957826 kernel: TERM=linux Jan 13 21:18:21.957838 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:18:21.957859 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:18:21.957892 systemd[1]: Detected virtualization kvm. Jan 13 21:18:21.957908 systemd[1]: Detected architecture x86-64. Jan 13 21:18:21.957921 systemd[1]: Running in initrd. Jan 13 21:18:21.957944 systemd[1]: No hostname configured, using default hostname. Jan 13 21:18:21.957956 systemd[1]: Hostname set to . Jan 13 21:18:21.957970 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:18:21.957982 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:18:21.957995 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:18:21.958009 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:18:21.958024 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:18:21.958038 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:18:21.958057 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:18:21.958071 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:18:21.958085 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:18:21.958099 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:18:21.958112 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:18:21.958125 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:18:21.958138 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:18:21.958159 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:18:21.958172 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:18:21.958184 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:18:21.958197 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:18:21.958210 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:18:21.958223 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:18:21.958236 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 13 21:18:21.958249 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:18:21.958268 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:18:21.958282 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:18:21.958295 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:18:21.958309 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:18:21.958322 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:18:21.958335 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:18:21.958348 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:18:21.958361 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:18:21.958375 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:18:21.958393 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:18:21.958406 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:18:21.958420 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:18:21.958433 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:18:21.958447 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:18:21.958501 systemd-journald[194]: Collecting audit messages is disabled. Jan 13 21:18:21.958536 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:18:21.958550 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:18:21.958571 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:18:21.958585 systemd-journald[194]: Journal started Jan 13 21:18:21.958611 systemd-journald[194]: Runtime Journal (/run/log/journal/8016aa5202fb4c518c975d780c5dd862) is 6.0M, max 48.3M, 42.2M free. Jan 13 21:18:21.950298 systemd-modules-load[195]: Inserted module 'overlay' Jan 13 21:18:21.962069 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:18:21.964901 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:18:21.966473 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:18:21.970677 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:18:21.983843 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:18:21.994909 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 21:18:21.997601 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 13 21:18:21.998751 kernel: Bridge firewalling registered Jan 13 21:18:22.000248 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 21:18:22.002242 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:18:22.005223 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:18:22.008735 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 13 21:18:22.016344 dracut-cmdline[222]: dracut-dracut-053 Jan 13 21:18:22.023991 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:18:22.040640 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:18:22.047254 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:18:22.094689 systemd-resolved[255]: Positive Trust Anchors: Jan 13 21:18:22.094710 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:18:22.094754 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:18:22.098403 systemd-resolved[255]: Defaulting to hostname 'linux'. Jan 13 21:18:22.100037 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:18:22.105281 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:18:22.136946 kernel: SCSI subsystem initialized Jan 13 21:18:22.148937 kernel: Loading iSCSI transport class v2.0-870. Jan 13 21:18:22.160937 kernel: iscsi: registered transport (tcp) Jan 13 21:18:22.189265 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:18:22.189357 kernel: QLogic iSCSI HBA Driver Jan 13 21:18:22.251511 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:18:22.265230 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:18:22.294715 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 21:18:22.294806 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:18:22.294823 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:18:22.343947 kernel: raid6: avx2x4 gen() 19756 MB/s Jan 13 21:18:22.360941 kernel: raid6: avx2x2 gen() 19668 MB/s Jan 13 21:18:22.378370 kernel: raid6: avx2x1 gen() 16423 MB/s Jan 13 21:18:22.378442 kernel: raid6: using algorithm avx2x4 gen() 19756 MB/s Jan 13 21:18:22.396371 kernel: raid6: .... xor() 5792 MB/s, rmw enabled Jan 13 21:18:22.396433 kernel: raid6: using avx2x2 recovery algorithm Jan 13 21:18:22.423915 kernel: xor: automatically using best checksumming function avx Jan 13 21:18:22.640926 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:18:22.660227 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:18:22.674195 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:18:22.695431 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jan 13 21:18:22.702317 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 13 21:18:22.707266 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:18:22.728345 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Jan 13 21:18:22.772673 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:18:22.778214 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:18:22.850322 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:18:22.861269 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:18:22.877705 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:18:22.880136 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:18:22.883749 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:18:22.886495 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:18:22.896142 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:18:22.902908 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 21:18:22.912947 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 13 21:18:22.946278 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 21:18:22.946303 kernel: AES CTR mode by8 optimization enabled Jan 13 21:18:22.946320 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 21:18:22.946524 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:18:22.946553 kernel: GPT:9289727 != 19775487 Jan 13 21:18:22.946569 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:18:22.946584 kernel: GPT:9289727 != 19775487 Jan 13 21:18:22.946598 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:18:22.946625 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:18:22.917678 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:18:22.917763 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:18:22.923541 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:18:22.925221 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:18:22.925323 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:18:22.965035 kernel: libata version 3.00 loaded. Jan 13 21:18:22.926865 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:18:22.943094 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:18:22.974869 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (473) Jan 13 21:18:22.974922 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (475) Jan 13 21:18:22.944760 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 13 21:18:22.978089 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 21:18:23.002108 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 21:18:23.002131 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 21:18:23.002400 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 21:18:23.002625 kernel: scsi host0: ahci Jan 13 21:18:23.002855 kernel: scsi host1: ahci Jan 13 21:18:23.004051 kernel: scsi host2: ahci Jan 13 21:18:23.004260 kernel: scsi host3: ahci Jan 13 21:18:23.004442 kernel: scsi host4: ahci Jan 13 21:18:23.004651 kernel: scsi host5: ahci Jan 13 21:18:23.004840 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 13 21:18:23.004856 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 13 21:18:23.004870 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 13 21:18:23.004902 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 13 21:18:23.004917 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 13 21:18:23.004930 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 13 21:18:22.954418 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:18:22.954642 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:18:22.988117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:18:23.002185 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 21:18:23.015592 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:18:23.032222 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 21:18:23.042845 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:18:23.051793 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 21:18:23.056049 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 21:18:23.069141 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:18:23.073344 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:18:23.078491 disk-uuid[568]: Primary Header is updated. Jan 13 21:18:23.078491 disk-uuid[568]: Secondary Entries is updated. Jan 13 21:18:23.078491 disk-uuid[568]: Secondary Header is updated. Jan 13 21:18:23.082803 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:18:23.087910 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:18:23.098733 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 21:18:23.306921 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 21:18:23.307004 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 21:18:23.307918 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 21:18:23.310439 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 13 21:18:23.310471 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 13 21:18:23.310488 kernel: ata3.00: applying bridge limits Jan 13 21:18:23.311176 kernel: ata3.00: configured for UDMA/100 Jan 13 21:18:23.311946 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 21:18:23.314917 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 21:18:23.314949 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 21:18:23.366086 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 13 21:18:23.382924 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 21:18:23.382945 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 13 21:18:24.089911 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:18:24.089978 disk-uuid[571]: The operation has completed successfully. Jan 13 21:18:24.125062 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:18:24.125231 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:18:24.158110 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:18:24.162434 sh[596]: Success Jan 13 21:18:24.176910 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 13 21:18:24.218132 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:18:24.231766 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:18:24.235307 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 21:18:24.247673 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 13 21:18:24.247740 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:18:24.247756 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:18:24.248712 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:18:24.249460 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:18:24.255630 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:18:24.256669 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:18:24.271274 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:18:24.274660 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 21:18:24.285649 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:18:24.285702 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:18:24.285717 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:18:24.289976 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:18:24.300865 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:18:24.303010 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:18:24.313272 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 13 21:18:24.322121 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 21:18:24.394852 ignition[692]: Ignition 2.19.0 Jan 13 21:18:24.395360 ignition[692]: Stage: fetch-offline Jan 13 21:18:24.395412 ignition[692]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:18:24.395426 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:18:24.395575 ignition[692]: parsed url from cmdline: "" Jan 13 21:18:24.395581 ignition[692]: no config URL provided Jan 13 21:18:24.395588 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:18:24.395600 ignition[692]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:18:24.395640 ignition[692]: op(1): [started] loading QEMU firmware config module Jan 13 21:18:24.395646 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 21:18:24.410820 ignition[692]: op(1): [finished] loading QEMU firmware config module Jan 13 21:18:24.411815 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:18:24.432138 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:18:24.455526 ignition[692]: parsing config with SHA512: 2add410d533155d8a65b53f5ac12ec9a116e109c316c3d9a9c2b2cb8ef45f13f87ce5a938942607c6dd64d80e22cb961a2b2528c619d8009b80a1d875c979bec Jan 13 21:18:24.460205 unknown[692]: fetched base config from "system" Jan 13 21:18:24.460233 unknown[692]: fetched user config from "qemu" Jan 13 21:18:24.461410 ignition[692]: fetch-offline: fetch-offline passed Jan 13 21:18:24.461600 ignition[692]: Ignition finished successfully Jan 13 21:18:24.464370 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:18:24.467214 systemd-networkd[784]: lo: Link UP Jan 13 21:18:24.467226 systemd-networkd[784]: lo: Gained carrier Jan 13 21:18:24.469466 systemd-networkd[784]: Enumeration completed Jan 13 21:18:24.469708 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:18:24.469994 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:18:24.470000 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:18:24.471296 systemd-networkd[784]: eth0: Link UP Jan 13 21:18:24.471301 systemd-networkd[784]: eth0: Gained carrier Jan 13 21:18:24.471310 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:18:24.472177 systemd[1]: Reached target network.target - Network. Jan 13 21:18:24.474243 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 21:18:24.484954 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:18:24.485178 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 13 21:18:24.500431 ignition[787]: Ignition 2.19.0 Jan 13 21:18:24.500444 ignition[787]: Stage: kargs Jan 13 21:18:24.500625 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:18:24.500638 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:18:24.501504 ignition[787]: kargs: kargs passed Jan 13 21:18:24.501555 ignition[787]: Ignition finished successfully Jan 13 21:18:24.505622 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:18:24.516187 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 21:18:24.529012 ignition[796]: Ignition 2.19.0 Jan 13 21:18:24.529025 ignition[796]: Stage: disks Jan 13 21:18:24.529244 ignition[796]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:18:24.529260 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:18:24.532957 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:18:24.530338 ignition[796]: disks: disks passed Jan 13 21:18:24.534289 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 21:18:24.530402 ignition[796]: Ignition finished successfully Jan 13 21:18:24.536149 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:18:24.538017 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:18:24.540077 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:18:24.541838 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:18:24.554254 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:18:24.569613 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 21:18:24.577011 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:18:24.596111 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:18:24.689919 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 13 21:18:24.691030 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:18:24.693480 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:18:24.716011 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:18:24.718919 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:18:24.719383 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:18:24.719444 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:18:24.727827 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814) Jan 13 21:18:24.727856 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:18:24.719478 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:18:24.732247 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:18:24.732263 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:18:24.734909 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:18:24.750036 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:18:24.755131 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 13 21:18:24.757031 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 21:18:24.798731 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:18:24.804083 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:18:24.811168 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:18:24.817703 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:18:24.920766 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:18:24.939116 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:18:24.941274 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:18:24.949897 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:18:24.979218 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:18:24.998920 ignition[928]: INFO : Ignition 2.19.0 Jan 13 21:18:24.998920 ignition[928]: INFO : Stage: mount Jan 13 21:18:25.000798 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:18:25.000798 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:18:25.000798 ignition[928]: INFO : mount: mount passed Jan 13 21:18:25.000798 ignition[928]: INFO : Ignition finished successfully Jan 13 21:18:25.006602 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:18:25.020028 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:18:25.247137 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:18:25.259274 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:18:25.269083 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (941) Jan 13 21:18:25.269144 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:18:25.269161 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:18:25.270282 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:18:25.273914 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:18:25.276506 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
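
[Note] The initrd-setup-root messages above come from cut probing account databases that do not yet exist under /sysroot. A minimal sketch of the equivalent lookup (first field of each passwd line), failing the same way when the file is missing:

    # Sketch of a `cut -d: -f1`-style read of /sysroot/etc/passwd; the path may
    # not exist yet, exactly as the log lines above show.
    from pathlib import Path

    passwd = Path("/sysroot/etc/passwd")
    if passwd.is_file():
        users = [line.split(":", 1)[0] for line in passwd.read_text().splitlines() if line]
        print(users)
    else:
        print(f"cut: {passwd}: No such file or directory")
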
Jan 13 21:18:25.306259 ignition[958]: INFO : Ignition 2.19.0 Jan 13 21:18:25.306259 ignition[958]: INFO : Stage: files Jan 13 21:18:25.308661 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:18:25.308661 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:18:25.308661 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:18:25.312914 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:18:25.312914 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:18:25.335073 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:18:25.336819 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:18:25.336819 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:18:25.336085 unknown[958]: wrote ssh authorized keys file for user: core Jan 13 21:18:25.340949 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:18:25.340949 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 21:18:25.386912 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 21:18:25.547784 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:18:25.547784 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 21:18:25.547784 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 13 21:18:25.764190 systemd-networkd[784]: eth0: Gained IPv6LL Jan 13 21:18:26.118788 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 21:18:26.225396 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 21:18:26.227460 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:18:26.227460 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:18:26.227460 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:18:26.227460 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:18:26.227460 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:18:26.227460 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:18:26.227460 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:18:26.227460 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:18:26.227460 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:18:26.243288 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:18:26.243288 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:18:26.243288 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:18:26.243288 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:18:26.243288 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 13 21:18:26.650106 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 21:18:27.050370 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:18:27.050370 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 13 21:18:27.055164 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:18:27.055164 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:18:27.055164 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 13 21:18:27.055164 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 13 21:18:27.055164 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:18:27.055164 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:18:27.055164 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 13 21:18:27.055164 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 21:18:27.082176 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:18:27.088251 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:18:27.090371 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 21:18:27.090371 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:18:27.090371 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:18:27.090371 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:18:27.090371 
ignition[958]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:18:27.090371 ignition[958]: INFO : files: files passed Jan 13 21:18:27.090371 ignition[958]: INFO : Ignition finished successfully Jan 13 21:18:27.092210 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:18:27.104221 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:18:27.106771 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:18:27.109257 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:18:27.109415 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:18:27.118562 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 21:18:27.121616 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:18:27.121616 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:18:27.125921 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:18:27.129682 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:18:27.130057 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:18:27.141047 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:18:27.178079 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:18:27.178272 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:18:27.179829 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:18:27.182415 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:18:27.184812 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:18:27.188944 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:18:27.214366 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:18:27.233195 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:18:27.245803 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:18:27.248778 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:18:27.251735 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:18:27.251970 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:18:27.252180 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:18:27.257795 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:18:27.258021 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:18:27.260332 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:18:27.260694 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:18:27.266313 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:18:27.269017 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
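
[Note] The files stage logged above (writing the helm and cilium archives, the kubernetes sysext link, the prepare-helm.service unit, and the coreos-metadata preset) is driven by an Ignition config. The sketch below only approximates the shape of such a config, assuming the Ignition v3 schema; URLs and unit contents are abbreviated placeholders, not the exact values used on this host.

    # Approximate shape of an Ignition v3 config that would produce file writes
    # and unit presets like the ones in the files stage above (placeholders only).
    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
                },
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n# ...\n"},
                {"name": "coreos-metadata.service", "enabled": False},
            ],
        },
    }

    print(json.dumps(config, indent=2))
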
Jan 13 21:18:27.271746 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:18:27.273015 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:18:27.277174 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:18:27.279621 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:18:27.281827 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:18:27.282003 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:18:27.286529 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:18:27.286705 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:18:27.287278 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:18:27.287410 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:18:27.293397 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:18:27.293549 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:18:27.297605 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:18:27.297738 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:18:27.300295 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:18:27.302556 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:18:27.305977 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:18:27.306367 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:18:27.310196 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:18:27.313222 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:18:27.313339 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:18:27.315529 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:18:27.315644 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:18:27.317867 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:18:27.318038 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:18:27.320391 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:18:27.320525 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:18:27.340151 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:18:27.341142 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:18:27.343665 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:18:27.343868 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:18:27.347588 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:18:27.347785 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:18:27.353097 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:18:27.353333 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 13 21:18:27.360981 ignition[1012]: INFO : Ignition 2.19.0 Jan 13 21:18:27.360981 ignition[1012]: INFO : Stage: umount Jan 13 21:18:27.363245 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:18:27.363245 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:18:27.363245 ignition[1012]: INFO : umount: umount passed Jan 13 21:18:27.363245 ignition[1012]: INFO : Ignition finished successfully Jan 13 21:18:27.364093 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:18:27.364242 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:18:27.367861 systemd[1]: Stopped target network.target - Network. Jan 13 21:18:27.369218 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:18:27.369317 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:18:27.371595 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:18:27.371651 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:18:27.373982 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:18:27.374044 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:18:27.376360 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:18:27.376419 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:18:27.378912 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:18:27.381328 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:18:27.382952 systemd-networkd[784]: eth0: DHCPv6 lease lost Jan 13 21:18:27.385686 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:18:27.385844 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:18:27.386354 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:18:27.386411 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:18:27.403047 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:18:27.405567 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:18:27.405669 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:18:27.410612 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:18:27.414963 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:18:27.415178 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:18:27.429570 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:18:27.431791 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:18:27.431961 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:18:27.435987 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:18:27.436186 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:18:27.438724 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:18:27.438793 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:18:27.440362 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:18:27.440405 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:18:27.445479 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 13 21:18:27.445572 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:18:27.449464 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:18:27.449533 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:18:27.455165 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:18:27.455231 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:18:27.469150 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:18:27.470576 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:18:27.470649 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:18:27.473414 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:18:27.473500 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:18:27.475846 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:18:27.475925 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:18:27.478637 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 21:18:27.478692 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:18:27.481454 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:18:27.481516 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:18:27.484392 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:18:27.484444 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:18:27.487291 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:18:27.487343 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:18:27.487891 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:18:27.488027 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:18:27.660769 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:18:27.660942 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:18:27.662391 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:18:27.664364 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:18:27.664425 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:18:27.682147 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:18:27.690602 systemd[1]: Switching root. Jan 13 21:18:27.717996 systemd-journald[194]: Journal stopped Jan 13 21:18:29.149606 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Jan 13 21:18:29.149675 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:18:29.149704 kernel: SELinux: policy capability open_perms=1 Jan 13 21:18:29.149716 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:18:29.149737 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:18:29.149749 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:18:29.149760 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:18:29.149772 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:18:29.149784 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:18:29.149795 kernel: audit: type=1403 audit(1736803108.262:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:18:29.149808 systemd[1]: Successfully loaded SELinux policy in 41.716ms. Jan 13 21:18:29.149828 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.111ms. Jan 13 21:18:29.149841 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:18:29.149856 systemd[1]: Detected virtualization kvm. Jan 13 21:18:29.149869 systemd[1]: Detected architecture x86-64. Jan 13 21:18:29.149900 systemd[1]: Detected first boot. Jan 13 21:18:29.149914 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:18:29.149926 zram_generator::config[1056]: No configuration found. Jan 13 21:18:29.149939 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:18:29.149951 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:18:29.149963 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:18:29.149979 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:18:29.149992 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:18:29.150004 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:18:29.150016 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:18:29.150028 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:18:29.150040 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:18:29.150053 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:18:29.150065 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:18:29.150077 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:18:29.150092 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:18:29.150106 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:18:29.150120 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:18:29.150135 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:18:29.150147 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 13 21:18:29.150159 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:18:29.150172 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:18:29.150184 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:18:29.150203 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:18:29.150219 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:18:29.150232 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:18:29.150244 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:18:29.150256 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:18:29.150268 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:18:29.150281 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:18:29.150297 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:18:29.150310 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:18:29.150324 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:18:29.150336 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:18:29.150349 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:18:29.150361 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:18:29.150374 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:18:29.150392 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:18:29.150404 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:18:29.150416 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:18:29.150429 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:18:29.150443 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:18:29.150464 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:18:29.150477 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:18:29.150490 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:18:29.150505 systemd[1]: Reached target machines.target - Containers. Jan 13 21:18:29.150521 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:18:29.150538 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:18:29.150554 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:18:29.150570 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:18:29.150582 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:18:29.150594 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:18:29.150606 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:18:29.150618 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 13 21:18:29.150630 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:18:29.150643 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:18:29.150655 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:18:29.150670 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:18:29.150682 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:18:29.150694 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:18:29.150706 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:18:29.150718 kernel: fuse: init (API version 7.39) Jan 13 21:18:29.150731 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:18:29.150743 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:18:29.150755 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:18:29.150769 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:18:29.150783 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:18:29.150795 systemd[1]: Stopped verity-setup.service. Jan 13 21:18:29.150810 kernel: loop: module loaded Jan 13 21:18:29.150827 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:18:29.150845 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:18:29.150866 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:18:29.150898 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:18:29.150915 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:18:29.150931 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:18:29.150947 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:18:29.150963 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:18:29.150979 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:18:29.150995 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:18:29.151010 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:18:29.151030 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:18:29.151047 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:18:29.151063 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:18:29.151079 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:18:29.151096 kernel: ACPI: bus type drm_connector registered Jan 13 21:18:29.151119 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:18:29.151136 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:18:29.151177 systemd-journald[1126]: Collecting audit messages is disabled. Jan 13 21:18:29.151205 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:18:29.151222 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 13 21:18:29.151238 systemd-journald[1126]: Journal started Jan 13 21:18:29.151270 systemd-journald[1126]: Runtime Journal (/run/log/journal/8016aa5202fb4c518c975d780c5dd862) is 6.0M, max 48.3M, 42.2M free. Jan 13 21:18:28.853403 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:18:28.884569 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 21:18:28.885220 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:18:29.152694 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:18:29.156035 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:18:29.157310 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:18:29.159130 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:18:29.160849 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:18:29.162948 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:18:29.177921 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:18:29.187031 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:18:29.189892 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:18:29.191286 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:18:29.191330 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:18:29.193981 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:18:29.199130 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:18:29.202418 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:18:29.204149 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:18:29.206215 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:18:29.209093 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:18:29.210323 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:18:29.212894 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:18:29.214265 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:18:29.216945 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:18:29.220092 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:18:29.227157 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:18:29.232259 systemd-journald[1126]: Time spent on flushing to /var/log/journal/8016aa5202fb4c518c975d780c5dd862 is 14.989ms for 999 entries. Jan 13 21:18:29.232259 systemd-journald[1126]: System Journal (/var/log/journal/8016aa5202fb4c518c975d780c5dd862) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:18:29.265074 systemd-journald[1126]: Received client request to flush runtime journal. 
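
[Note] The journal flush statistic above works out to roughly 15 microseconds per entry:

    # Per-entry cost implied by "Time spent on flushing ... is 14.989ms for 999 entries".
    entries, total_ms = 999, 14.989
    print(f"~{total_ms / entries * 1000:.1f} us per journal entry")
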
Jan 13 21:18:29.232240 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:18:29.235153 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:18:29.235582 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:18:29.254107 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:18:29.256369 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:18:29.268898 kernel: loop0: detected capacity change from 0 to 142488 Jan 13 21:18:29.270170 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:18:29.273079 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:18:29.274904 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:18:29.285074 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:18:29.286911 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:18:29.295519 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Jan 13 21:18:29.295547 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Jan 13 21:18:29.299705 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 21:18:29.301051 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:18:29.302530 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:18:29.304910 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:18:29.308279 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:18:29.318150 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:18:29.328078 kernel: loop1: detected capacity change from 0 to 140768 Jan 13 21:18:29.349611 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:18:29.359243 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:18:29.369937 kernel: loop2: detected capacity change from 0 to 211296 Jan 13 21:18:29.380867 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jan 13 21:18:29.380904 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jan 13 21:18:29.386660 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:18:29.399918 kernel: loop3: detected capacity change from 0 to 142488 Jan 13 21:18:29.413926 kernel: loop4: detected capacity change from 0 to 140768 Jan 13 21:18:29.425896 kernel: loop5: detected capacity change from 0 to 211296 Jan 13 21:18:29.432282 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 21:18:29.432936 (sd-merge)[1197]: Merged extensions into '/usr'. Jan 13 21:18:29.439271 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:18:29.439372 systemd[1]: Reloading... Jan 13 21:18:29.493698 zram_generator::config[1223]: No configuration found. Jan 13 21:18:29.570593 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
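
[Note] The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr. As a hedged sketch only (simplified from the documented extension-release matching rules, with a hypothetical unpacked extension path), the basic compatibility check is roughly:

    # Rough sketch: a sysext tree must ship usr/lib/extension-release.d/
    # extension-release.<NAME> whose ID matches the host os-release (real
    # matching rules are richer than shown here).
    from pathlib import Path

    def os_release_field(path, key):
        """Return the value of KEY=... from an os-release style file, if present."""
        for line in Path(path).read_text().splitlines():
            if line.startswith(key + "="):
                return line.split("=", 1)[1].strip().strip('"')
        return None

    def extension_compatible(ext_root, name, host_os_release="/etc/os-release"):
        """Very rough compatibility check for an unpacked extension tree."""
        rel = Path(ext_root) / "usr/lib/extension-release.d" / f"extension-release.{name}"
        if not rel.is_file():
            return False
        return os_release_field(rel, "ID") in (os_release_field(host_os_release, "ID"), "_any")

    # Hypothetical unpacked extension directory:
    print(extension_compatible("/run/extensions/kubernetes", "kubernetes"))
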
Jan 13 21:18:29.617257 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:18:29.668506 systemd[1]: Reloading finished in 228 ms. Jan 13 21:18:29.712559 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:18:29.714233 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:18:29.735245 systemd[1]: Starting ensure-sysext.service... Jan 13 21:18:29.740081 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:18:29.743366 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:18:29.743378 systemd[1]: Reloading... Jan 13 21:18:29.762629 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:18:29.763394 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:18:29.764498 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:18:29.764866 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 13 21:18:29.765041 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 13 21:18:29.772094 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:18:29.772108 systemd-tmpfiles[1261]: Skipping /boot Jan 13 21:18:29.788304 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:18:29.788316 systemd-tmpfiles[1261]: Skipping /boot Jan 13 21:18:29.796904 zram_generator::config[1287]: No configuration found. Jan 13 21:18:29.914919 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:18:29.964666 systemd[1]: Reloading finished in 220 ms. Jan 13 21:18:29.984696 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:18:29.998567 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:18:30.008417 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:18:30.011154 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:18:30.013621 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:18:30.017989 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:18:30.024193 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:18:30.028202 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:18:30.031766 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:18:30.032042 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:18:30.033661 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:18:30.036565 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 13 21:18:30.042174 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:18:30.043609 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:18:30.046051 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:18:30.047930 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:18:30.049021 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:18:30.049402 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:18:30.051618 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:18:30.052935 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:18:30.057310 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:18:30.058624 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:18:30.058733 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Jan 13 21:18:30.066519 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:18:30.066764 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:18:30.073382 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:18:30.077082 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:18:30.080955 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:18:30.082355 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:18:30.082528 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:18:30.083818 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:18:30.086637 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:18:30.087271 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:18:30.090136 augenrules[1357]: No rules Jan 13 21:18:30.091726 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:18:30.094274 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:18:30.096389 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:18:30.096587 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:18:30.098587 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:18:30.098769 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:18:30.100215 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:18:30.102229 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:18:30.118495 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:18:30.127994 systemd[1]: Finished ensure-sysext.service. 
Jan 13 21:18:30.137458 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:18:30.137621 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:18:30.143102 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:18:30.148124 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:18:30.150026 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:18:30.154252 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1371) Jan 13 21:18:30.154555 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:18:30.156329 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:18:30.160416 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 21:18:30.166099 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:18:30.167564 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:18:30.167599 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:18:30.168319 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:18:30.169223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:18:30.172324 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:18:30.172542 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:18:30.180148 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 21:18:30.183088 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:18:30.207855 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:18:30.219904 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 13 21:18:30.223043 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:18:30.224001 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:18:30.234162 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:18:30.253448 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:18:30.261239 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 13 21:18:30.264214 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 13 21:18:30.264387 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 13 21:18:30.264586 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 13 21:18:30.268306 systemd-resolved[1330]: Positive Trust Anchors: Jan 13 21:18:30.268861 systemd-resolved[1330]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:18:30.268974 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:18:30.274448 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 13 21:18:30.276482 systemd-resolved[1330]: Defaulting to hostname 'linux'. Jan 13 21:18:30.279848 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:18:30.281408 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:18:30.283235 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 21:18:30.284797 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:18:30.290048 systemd-networkd[1397]: lo: Link UP Jan 13 21:18:30.290057 systemd-networkd[1397]: lo: Gained carrier Jan 13 21:18:30.292760 systemd-networkd[1397]: Enumeration completed Jan 13 21:18:30.293289 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:18:30.293639 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:18:30.294968 systemd-networkd[1397]: eth0: Link UP Jan 13 21:18:30.295270 systemd-networkd[1397]: eth0: Gained carrier Jan 13 21:18:30.295330 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:18:30.299962 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:18:30.301494 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:18:30.303658 systemd[1]: Reached target network.target - Network. Jan 13 21:18:31.317092 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:18:30.304982 systemd-networkd[1397]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:18:30.305957 systemd-timesyncd[1398]: Network configuration changed, trying to establish connection. Jan 13 21:18:31.317484 systemd-timesyncd[1398]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 21:18:31.317530 systemd-timesyncd[1398]: Initial clock synchronization to Mon 2025-01-13 21:18:31.317064 UTC. Jan 13 21:18:31.317564 systemd-resolved[1330]: Clock change detected. Flushing caches. Jan 13 21:18:31.318294 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:18:31.342359 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:18:31.342667 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:18:31.391320 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
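
[Note] The DHCPv4 lease above (10.0.0.36/16 via gateway 10.0.0.1) can be unpacked with the standard library to show what systemd-networkd actually configured on eth0:

    # Derive network parameters from the leased address 10.0.0.36/16.
    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.36/16")
    print(iface.network)                                      # 10.0.0.0/16
    print(iface.netmask)                                      # 255.255.0.0
    print(iface.network.num_addresses)                        # 65536 addresses
    print(ipaddress.ip_address("10.0.0.1") in iface.network)  # gateway is on-link: True
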
Jan 13 21:18:31.420903 kernel: kvm_amd: TSC scaling supported Jan 13 21:18:31.420984 kernel: kvm_amd: Nested Virtualization enabled Jan 13 21:18:31.420998 kernel: kvm_amd: Nested Paging enabled Jan 13 21:18:31.421011 kernel: kvm_amd: LBR virtualization supported Jan 13 21:18:31.421516 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 13 21:18:31.422696 kernel: kvm_amd: Virtual GIF supported Jan 13 21:18:31.444423 kernel: EDAC MC: Ver: 3.0.0 Jan 13 21:18:31.455908 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:18:31.484680 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:18:31.496589 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:18:31.505961 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:18:31.546172 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:18:31.548677 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:18:31.549830 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:18:31.551039 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:18:31.552341 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:18:31.553819 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:18:31.555087 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:18:31.556433 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:18:31.557749 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:18:31.557778 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:18:31.558693 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:18:31.560257 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:18:31.563153 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:18:31.578991 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:18:31.581466 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:18:31.583117 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:18:31.584362 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:18:31.585365 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:18:31.586368 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:18:31.586394 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:18:31.587374 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:18:31.589553 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:18:31.593599 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:18:31.593506 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:18:31.596582 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 13 21:18:31.598410 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:18:31.602573 jq[1440]: false Jan 13 21:18:31.602609 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:18:31.605512 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:18:31.610910 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:18:31.616552 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:18:31.626450 dbus-daemon[1439]: [system] SELinux support is enabled Jan 13 21:18:31.629112 extend-filesystems[1441]: Found loop3 Jan 13 21:18:31.629112 extend-filesystems[1441]: Found loop4 Jan 13 21:18:31.629112 extend-filesystems[1441]: Found loop5 Jan 13 21:18:31.629112 extend-filesystems[1441]: Found sr0 Jan 13 21:18:31.629112 extend-filesystems[1441]: Found vda Jan 13 21:18:31.629112 extend-filesystems[1441]: Found vda1 Jan 13 21:18:31.629112 extend-filesystems[1441]: Found vda2 Jan 13 21:18:31.629112 extend-filesystems[1441]: Found vda3 Jan 13 21:18:31.629112 extend-filesystems[1441]: Found usr Jan 13 21:18:31.629112 extend-filesystems[1441]: Found vda4 Jan 13 21:18:31.629112 extend-filesystems[1441]: Found vda6 Jan 13 21:18:31.629112 extend-filesystems[1441]: Found vda7 Jan 13 21:18:31.629112 extend-filesystems[1441]: Found vda9 Jan 13 21:18:31.629112 extend-filesystems[1441]: Checking size of /dev/vda9 Jan 13 21:18:31.628549 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:18:31.631064 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:18:31.632973 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:18:31.635696 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:18:31.639105 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:18:31.641461 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:18:31.645854 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:18:31.649972 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:18:31.650212 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:18:31.651610 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:18:31.651825 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:18:31.653127 jq[1457]: true Jan 13 21:18:31.657254 update_engine[1455]: I20250113 21:18:31.657092 1455 main.cc:92] Flatcar Update Engine starting Jan 13 21:18:31.657725 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:18:31.657998 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 13 21:18:31.658941 update_engine[1455]: I20250113 21:18:31.658789 1455 update_check_scheduler.cc:74] Next update check in 4m43s Jan 13 21:18:31.668597 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:18:31.677018 extend-filesystems[1441]: Resized partition /dev/vda9 Jan 13 21:18:31.680949 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:18:31.680997 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:18:31.681739 extend-filesystems[1474]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:18:31.684302 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:18:31.686409 jq[1462]: true Jan 13 21:18:31.684345 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:18:31.691354 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1385) Jan 13 21:18:31.699110 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:18:31.699143 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:18:31.700495 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:18:31.700902 systemd-logind[1452]: New seat seat0. Jan 13 21:18:31.714557 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:18:31.729804 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:18:31.756009 tar[1461]: linux-amd64/helm Jan 13 21:18:31.785602 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 21:18:31.862277 locksmithd[1492]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:18:32.058309 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:18:32.083271 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:18:32.090711 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:18:32.092350 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 21:18:32.102560 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:18:32.102825 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:18:32.105926 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:18:32.820809 containerd[1463]: time="2025-01-13T21:18:32.820543407Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:18:32.126512 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:18:32.821030 extend-filesystems[1474]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:18:32.821030 extend-filesystems[1474]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:18:32.821030 extend-filesystems[1474]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 21:18:32.137753 systemd[1]: Started getty@tty1.service - Getty on tty1. 
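The resize messages above give the before/after size of /dev/vda9 in 4 KiB blocks; converting them shows how much the root filesystem grew on first boot. A quick arithmetic sketch using only the numbers reported in the log:

```python
# Block counts reported above for /dev/vda9, in 4 KiB ext4 blocks.
BLOCK = 4096
old_blocks, new_blocks = 553_472, 1_864_699

old_gib = old_blocks * BLOCK / 2**30
new_gib = new_blocks * BLOCK / 2**30
print(f"/dev/vda9: {old_gib:.2f} GiB -> {new_gib:.2f} GiB (+{new_gib - old_gib:.2f} GiB)")
# Roughly 2.11 GiB -> 7.11 GiB, i.e. extend-filesystems grew / by about 5 GiB.
```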
Jan 13 21:18:32.825915 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Jan 13 21:18:32.140236 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:18:32.141515 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:18:32.406629 systemd-networkd[1397]: eth0: Gained IPv6LL Jan 13 21:18:32.410143 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:18:32.411973 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:18:32.422534 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:18:32.827560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:18:32.831641 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:18:32.833501 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:18:32.834194 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:18:32.860220 containerd[1463]: time="2025-01-13T21:18:32.860086775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:18:32.863474 containerd[1463]: time="2025-01-13T21:18:32.862132311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:18:32.863474 containerd[1463]: time="2025-01-13T21:18:32.862187505Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:18:32.863474 containerd[1463]: time="2025-01-13T21:18:32.862211951Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:18:32.863474 containerd[1463]: time="2025-01-13T21:18:32.862465757Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:18:32.863474 containerd[1463]: time="2025-01-13T21:18:32.862492727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:18:32.863474 containerd[1463]: time="2025-01-13T21:18:32.862579450Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:18:32.863474 containerd[1463]: time="2025-01-13T21:18:32.862595440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:18:32.863474 containerd[1463]: time="2025-01-13T21:18:32.862852772Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:18:32.863474 containerd[1463]: time="2025-01-13T21:18:32.862874423Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:18:32.863474 containerd[1463]: time="2025-01-13T21:18:32.862899390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:18:32.863474 containerd[1463]: time="2025-01-13T21:18:32.862912585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:18:32.863776 containerd[1463]: time="2025-01-13T21:18:32.863034964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:18:32.864167 containerd[1463]: time="2025-01-13T21:18:32.864142942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:18:32.864396 containerd[1463]: time="2025-01-13T21:18:32.864374466Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:18:32.864461 containerd[1463]: time="2025-01-13T21:18:32.864446161Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:18:32.864634 containerd[1463]: time="2025-01-13T21:18:32.864614556Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:18:32.864770 containerd[1463]: time="2025-01-13T21:18:32.864752866Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:18:32.870080 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:18:32.872119 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:18:32.872414 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:18:32.875925 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:18:32.981593 tar[1461]: linux-amd64/LICENSE Jan 13 21:18:32.982056 tar[1461]: linux-amd64/README.md Jan 13 21:18:32.994950 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:18:33.132721 bash[1491]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:18:33.135073 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:18:33.137309 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 21:18:33.354701 containerd[1463]: time="2025-01-13T21:18:33.354610353Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:18:33.354701 containerd[1463]: time="2025-01-13T21:18:33.354704059Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:18:33.354701 containerd[1463]: time="2025-01-13T21:18:33.354722394Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:18:33.354890 containerd[1463]: time="2025-01-13T21:18:33.354738905Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:18:33.354890 containerd[1463]: time="2025-01-13T21:18:33.354755456Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:18:33.355064 containerd[1463]: time="2025-01-13T21:18:33.355032696Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 13 21:18:33.355332 containerd[1463]: time="2025-01-13T21:18:33.355301039Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:18:33.355507 containerd[1463]: time="2025-01-13T21:18:33.355483351Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:18:33.355543 containerd[1463]: time="2025-01-13T21:18:33.355505492Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:18:33.355543 containerd[1463]: time="2025-01-13T21:18:33.355521282Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:18:33.355543 containerd[1463]: time="2025-01-13T21:18:33.355538714Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:18:33.355612 containerd[1463]: time="2025-01-13T21:18:33.355555005Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:18:33.355612 containerd[1463]: time="2025-01-13T21:18:33.355566967Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:18:33.355612 containerd[1463]: time="2025-01-13T21:18:33.355580323Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:18:33.355612 containerd[1463]: time="2025-01-13T21:18:33.355594890Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:18:33.355612 containerd[1463]: time="2025-01-13T21:18:33.355608956Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:18:33.355712 containerd[1463]: time="2025-01-13T21:18:33.355622612Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:18:33.355712 containerd[1463]: time="2025-01-13T21:18:33.355639083Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:18:33.355712 containerd[1463]: time="2025-01-13T21:18:33.355663018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:18:33.355712 containerd[1463]: time="2025-01-13T21:18:33.355677385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:18:33.355712 containerd[1463]: time="2025-01-13T21:18:33.355700017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:18:33.355807 containerd[1463]: time="2025-01-13T21:18:33.355717670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:18:33.355807 containerd[1463]: time="2025-01-13T21:18:33.355731316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:18:33.355807 containerd[1463]: time="2025-01-13T21:18:33.355746294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:18:33.355807 containerd[1463]: time="2025-01-13T21:18:33.355758256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 13 21:18:33.355807 containerd[1463]: time="2025-01-13T21:18:33.355770930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:18:33.355807 containerd[1463]: time="2025-01-13T21:18:33.355787731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:18:33.355807 containerd[1463]: time="2025-01-13T21:18:33.355806066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:18:33.355947 containerd[1463]: time="2025-01-13T21:18:33.355828788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:18:33.355947 containerd[1463]: time="2025-01-13T21:18:33.355841202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:18:33.355947 containerd[1463]: time="2025-01-13T21:18:33.355856651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:18:33.355947 containerd[1463]: time="2025-01-13T21:18:33.355875787Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:18:33.355947 containerd[1463]: time="2025-01-13T21:18:33.355894862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:18:33.355947 containerd[1463]: time="2025-01-13T21:18:33.355914670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:18:33.355947 containerd[1463]: time="2025-01-13T21:18:33.355927263Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:18:33.356072 containerd[1463]: time="2025-01-13T21:18:33.355984030Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:18:33.356072 containerd[1463]: time="2025-01-13T21:18:33.356002815Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:18:33.356072 containerd[1463]: time="2025-01-13T21:18:33.356015128Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:18:33.356072 containerd[1463]: time="2025-01-13T21:18:33.356027611Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:18:33.356072 containerd[1463]: time="2025-01-13T21:18:33.356037089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:18:33.356072 containerd[1463]: time="2025-01-13T21:18:33.356049603Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:18:33.356072 containerd[1463]: time="2025-01-13T21:18:33.356059551Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:18:33.356072 containerd[1463]: time="2025-01-13T21:18:33.356070602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:18:33.356425 containerd[1463]: time="2025-01-13T21:18:33.356363501Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:18:33.356425 containerd[1463]: time="2025-01-13T21:18:33.356426089Z" level=info msg="Connect containerd service" Jan 13 21:18:33.356599 containerd[1463]: time="2025-01-13T21:18:33.356461986Z" level=info msg="using legacy CRI server" Jan 13 21:18:33.356599 containerd[1463]: time="2025-01-13T21:18:33.356469630Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:18:33.356599 containerd[1463]: time="2025-01-13T21:18:33.356590347Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:18:33.358365 containerd[1463]: time="2025-01-13T21:18:33.358304331Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:18:33.358632 
containerd[1463]: time="2025-01-13T21:18:33.358584306Z" level=info msg="Start subscribing containerd event" Jan 13 21:18:33.358656 containerd[1463]: time="2025-01-13T21:18:33.358641844Z" level=info msg="Start recovering state" Jan 13 21:18:33.358735 containerd[1463]: time="2025-01-13T21:18:33.358700374Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:18:33.358763 containerd[1463]: time="2025-01-13T21:18:33.358713027Z" level=info msg="Start event monitor" Jan 13 21:18:33.358763 containerd[1463]: time="2025-01-13T21:18:33.358758673Z" level=info msg="Start snapshots syncer" Jan 13 21:18:33.359463 containerd[1463]: time="2025-01-13T21:18:33.358768451Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:18:33.359463 containerd[1463]: time="2025-01-13T21:18:33.358768131Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:18:33.359463 containerd[1463]: time="2025-01-13T21:18:33.358956143Z" level=info msg="Start streaming server" Jan 13 21:18:33.359463 containerd[1463]: time="2025-01-13T21:18:33.359398824Z" level=info msg="containerd successfully booted in 1.180408s" Jan 13 21:18:33.359228 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:18:33.667241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:18:33.669160 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:18:33.670541 systemd[1]: Startup finished in 1.464s (kernel) + 6.528s (initrd) + 4.437s (userspace) = 12.431s. Jan 13 21:18:33.673210 (kubelet)[1551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:18:34.151895 kubelet[1551]: E0113 21:18:34.151692 1551 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:18:34.156652 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:18:34.156885 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:18:35.541927 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:18:35.543353 systemd[1]: Started sshd@0-10.0.0.36:22-10.0.0.1:46832.service - OpenSSH per-connection server daemon (10.0.0.1:46832). Jan 13 21:18:35.590130 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 46832 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:18:35.592354 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:35.601994 systemd-logind[1452]: New session 1 of user core. Jan 13 21:18:35.603414 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:18:35.613651 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:18:35.626524 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:18:35.629534 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:18:35.638462 (systemd)[1570]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:18:35.743165 systemd[1570]: Queued start job for default target default.target. 
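containerd's CRI plugin warns above that no CNI network config was found in /etc/cni/net.d, which is expected before a pod network add-on is installed. A minimal diagnostic sketch (not from the log) that checks the two directories named in the CRI config printed earlier:

```python
from pathlib import Path

# Directories taken from the CRI config dump above.
conf_dir = Path("/etc/cni/net.d")   # NetworkPluginConfDir
bin_dir = Path("/opt/cni/bin")      # NetworkPluginBinDir

confs = sorted(p.name for p in conf_dir.glob("*.conf*")) if conf_dir.is_dir() else []
plugins = sorted(p.name for p in bin_dir.iterdir()) if bin_dir.is_dir() else []

print("CNI configs :", confs or "none (matches the containerd warning)")
print("CNI plugins :", plugins or "none")
```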
Jan 13 21:18:35.758110 systemd[1570]: Created slice app.slice - User Application Slice. Jan 13 21:18:35.758139 systemd[1570]: Reached target paths.target - Paths. Jan 13 21:18:35.758155 systemd[1570]: Reached target timers.target - Timers. Jan 13 21:18:35.761198 systemd[1570]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:18:35.775003 systemd[1570]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:18:35.775155 systemd[1570]: Reached target sockets.target - Sockets. Jan 13 21:18:35.775174 systemd[1570]: Reached target basic.target - Basic System. Jan 13 21:18:35.775218 systemd[1570]: Reached target default.target - Main User Target. Jan 13 21:18:35.775254 systemd[1570]: Startup finished in 129ms. Jan 13 21:18:35.775648 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:18:35.777429 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:18:35.839217 systemd[1]: Started sshd@1-10.0.0.36:22-10.0.0.1:46838.service - OpenSSH per-connection server daemon (10.0.0.1:46838). Jan 13 21:18:35.876430 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 46838 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:18:35.878130 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:35.882543 systemd-logind[1452]: New session 2 of user core. Jan 13 21:18:35.893486 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:18:35.950257 sshd[1581]: pam_unix(sshd:session): session closed for user core Jan 13 21:18:35.961279 systemd[1]: sshd@1-10.0.0.36:22-10.0.0.1:46838.service: Deactivated successfully. Jan 13 21:18:35.963099 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:18:35.964547 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:18:35.976775 systemd[1]: Started sshd@2-10.0.0.36:22-10.0.0.1:46844.service - OpenSSH per-connection server daemon (10.0.0.1:46844). Jan 13 21:18:35.977788 systemd-logind[1452]: Removed session 2. Jan 13 21:18:36.010417 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 46844 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:18:36.012102 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:36.016430 systemd-logind[1452]: New session 3 of user core. Jan 13 21:18:36.024493 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:18:36.075438 sshd[1588]: pam_unix(sshd:session): session closed for user core Jan 13 21:18:36.093876 systemd[1]: sshd@2-10.0.0.36:22-10.0.0.1:46844.service: Deactivated successfully. Jan 13 21:18:36.095941 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:18:36.098093 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:18:36.110859 systemd[1]: Started sshd@3-10.0.0.36:22-10.0.0.1:46858.service - OpenSSH per-connection server daemon (10.0.0.1:46858). Jan 13 21:18:36.112020 systemd-logind[1452]: Removed session 3. Jan 13 21:18:36.141018 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 46858 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:18:36.142664 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:36.147057 systemd-logind[1452]: New session 4 of user core. Jan 13 21:18:36.156521 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 13 21:18:36.212588 sshd[1595]: pam_unix(sshd:session): session closed for user core Jan 13 21:18:36.220138 systemd[1]: sshd@3-10.0.0.36:22-10.0.0.1:46858.service: Deactivated successfully. Jan 13 21:18:36.221832 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:18:36.223408 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:18:36.233654 systemd[1]: Started sshd@4-10.0.0.36:22-10.0.0.1:46868.service - OpenSSH per-connection server daemon (10.0.0.1:46868). Jan 13 21:18:36.234867 systemd-logind[1452]: Removed session 4. Jan 13 21:18:36.267272 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 46868 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:18:36.269247 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:36.273533 systemd-logind[1452]: New session 5 of user core. Jan 13 21:18:36.283516 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:18:36.344691 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:18:36.345064 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:18:36.363878 sudo[1605]: pam_unix(sudo:session): session closed for user root Jan 13 21:18:36.366208 sshd[1602]: pam_unix(sshd:session): session closed for user core Jan 13 21:18:36.378854 systemd[1]: sshd@4-10.0.0.36:22-10.0.0.1:46868.service: Deactivated successfully. Jan 13 21:18:36.380972 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:18:36.382668 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:18:36.389917 systemd[1]: Started sshd@5-10.0.0.36:22-10.0.0.1:46872.service - OpenSSH per-connection server daemon (10.0.0.1:46872). Jan 13 21:18:36.391065 systemd-logind[1452]: Removed session 5. Jan 13 21:18:36.420596 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 46872 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:18:36.422706 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:36.427713 systemd-logind[1452]: New session 6 of user core. Jan 13 21:18:36.437591 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:18:36.493590 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:18:36.494015 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:18:36.498112 sudo[1614]: pam_unix(sudo:session): session closed for user root Jan 13 21:18:36.504804 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:18:36.505150 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:18:36.523654 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:18:36.525898 auditctl[1617]: No rules Jan 13 21:18:36.526446 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:18:36.526747 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:18:36.529777 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:18:36.562070 augenrules[1635]: No rules Jan 13 21:18:36.564150 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 13 21:18:36.566009 sudo[1613]: pam_unix(sudo:session): session closed for user root Jan 13 21:18:36.568578 sshd[1610]: pam_unix(sshd:session): session closed for user core Jan 13 21:18:36.587188 systemd[1]: sshd@5-10.0.0.36:22-10.0.0.1:46872.service: Deactivated successfully. Jan 13 21:18:36.589633 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:18:36.591536 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:18:36.613939 systemd[1]: Started sshd@6-10.0.0.36:22-10.0.0.1:46886.service - OpenSSH per-connection server daemon (10.0.0.1:46886). Jan 13 21:18:36.615171 systemd-logind[1452]: Removed session 6. Jan 13 21:18:36.644796 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 46886 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:18:36.646514 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:36.651031 systemd-logind[1452]: New session 7 of user core. Jan 13 21:18:36.660578 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:18:36.715032 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:18:36.715404 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:18:37.007741 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:18:37.008100 (dockerd)[1664]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:18:37.301494 dockerd[1664]: time="2025-01-13T21:18:37.301293683Z" level=info msg="Starting up" Jan 13 21:18:37.809962 dockerd[1664]: time="2025-01-13T21:18:37.809866900Z" level=info msg="Loading containers: start." Jan 13 21:18:37.934364 kernel: Initializing XFRM netlink socket Jan 13 21:18:38.015061 systemd-networkd[1397]: docker0: Link UP Jan 13 21:18:38.043355 dockerd[1664]: time="2025-01-13T21:18:38.043274021Z" level=info msg="Loading containers: done." Jan 13 21:18:38.058763 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2729938882-merged.mount: Deactivated successfully. Jan 13 21:18:38.061674 dockerd[1664]: time="2025-01-13T21:18:38.061558682Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:18:38.061786 dockerd[1664]: time="2025-01-13T21:18:38.061695769Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:18:38.061915 dockerd[1664]: time="2025-01-13T21:18:38.061867531Z" level=info msg="Daemon has completed initialization" Jan 13 21:18:38.101846 dockerd[1664]: time="2025-01-13T21:18:38.101753201Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:18:38.102506 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:18:39.107622 containerd[1463]: time="2025-01-13T21:18:39.107543146Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 21:18:39.720649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount738536472.mount: Deactivated successfully. 
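Once docker.service reports "API listen on /run/docker.sock" above, the daemon answers the Engine API on that Unix socket. A minimal probe sketch, assuming the default socket path from the log and sufficient permissions; /_ping is the Engine API's standard health endpoint:

```python
import socket

# Talk HTTP directly over the Unix socket the log says Docker listens on.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/docker.sock")                      # needs root or the docker group
    s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
    reply = s.recv(4096).decode(errors="replace")

print(reply.splitlines()[0])   # expect a 200 status line while the daemon is up
```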
Jan 13 21:18:41.558667 containerd[1463]: time="2025-01-13T21:18:41.558547153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:41.559554 containerd[1463]: time="2025-01-13T21:18:41.559190360Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Jan 13 21:18:41.577991 containerd[1463]: time="2025-01-13T21:18:41.577939601Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:41.581430 containerd[1463]: time="2025-01-13T21:18:41.581368281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:41.582319 containerd[1463]: time="2025-01-13T21:18:41.582264081Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.474662005s" Jan 13 21:18:41.582369 containerd[1463]: time="2025-01-13T21:18:41.582318373Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 21:18:41.610266 containerd[1463]: time="2025-01-13T21:18:41.610207214Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 21:18:43.694464 containerd[1463]: time="2025-01-13T21:18:43.694361608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:43.695278 containerd[1463]: time="2025-01-13T21:18:43.695224185Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Jan 13 21:18:43.696639 containerd[1463]: time="2025-01-13T21:18:43.696584126Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:43.699355 containerd[1463]: time="2025-01-13T21:18:43.699293276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:43.700185 containerd[1463]: time="2025-01-13T21:18:43.700143411Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.08987395s" Jan 13 21:18:43.700185 containerd[1463]: time="2025-01-13T21:18:43.700181232Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 13 
21:18:43.725195 containerd[1463]: time="2025-01-13T21:18:43.725131703Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 21:18:44.407290 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:18:44.416541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:18:44.571786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:18:44.579217 (kubelet)[1897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:18:44.726159 kubelet[1897]: E0113 21:18:44.725955 1897 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:18:44.736525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:18:44.736770 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:18:44.946358 containerd[1463]: time="2025-01-13T21:18:44.946275483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:44.947214 containerd[1463]: time="2025-01-13T21:18:44.947161545Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Jan 13 21:18:44.948606 containerd[1463]: time="2025-01-13T21:18:44.948563214Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:44.951817 containerd[1463]: time="2025-01-13T21:18:44.951779305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:44.952991 containerd[1463]: time="2025-01-13T21:18:44.952959769Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.227787911s" Jan 13 21:18:44.953044 containerd[1463]: time="2025-01-13T21:18:44.952994694Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 13 21:18:44.988144 containerd[1463]: time="2025-01-13T21:18:44.987939348Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 21:18:46.851654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3390041433.mount: Deactivated successfully. 
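Each kubelet start above exits immediately because /var/lib/kubelet/config.yaml does not exist yet; it is normally written later (for example by kubeadm), after which the unit's scheduled restarts can succeed. A small check for exactly the file the error names (diagnostic sketch, not from the log):

```python
from pathlib import Path

cfg = Path("/var/lib/kubelet/config.yaml")  # path from the kubelet error above
if cfg.is_file():
    print(f"{cfg} present ({cfg.stat().st_size} bytes); kubelet can load its config")
else:
    print(f"{cfg} missing; kubelet.service will keep restarting and failing")
```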
Jan 13 21:18:48.480758 containerd[1463]: time="2025-01-13T21:18:48.480628968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:48.492612 containerd[1463]: time="2025-01-13T21:18:48.492494470Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Jan 13 21:18:48.513133 containerd[1463]: time="2025-01-13T21:18:48.513093661Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:48.531983 containerd[1463]: time="2025-01-13T21:18:48.531920948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:48.532732 containerd[1463]: time="2025-01-13T21:18:48.532681264Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 3.544693345s" Jan 13 21:18:48.532732 containerd[1463]: time="2025-01-13T21:18:48.532727381Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 21:18:48.558458 containerd[1463]: time="2025-01-13T21:18:48.558404254Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:18:49.149552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1569185904.mount: Deactivated successfully. 
Jan 13 21:18:50.675849 containerd[1463]: time="2025-01-13T21:18:50.675770852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:50.676577 containerd[1463]: time="2025-01-13T21:18:50.676479411Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 21:18:50.677560 containerd[1463]: time="2025-01-13T21:18:50.677516927Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:50.680521 containerd[1463]: time="2025-01-13T21:18:50.680486726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:50.681627 containerd[1463]: time="2025-01-13T21:18:50.681569918Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.123122903s" Jan 13 21:18:50.681627 containerd[1463]: time="2025-01-13T21:18:50.681607398Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:18:50.710170 containerd[1463]: time="2025-01-13T21:18:50.710140287Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:18:51.374609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1270731999.mount: Deactivated successfully. 
Jan 13 21:18:51.381448 containerd[1463]: time="2025-01-13T21:18:51.381407423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:51.382263 containerd[1463]: time="2025-01-13T21:18:51.382201863Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 21:18:51.383448 containerd[1463]: time="2025-01-13T21:18:51.383407463Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:51.386022 containerd[1463]: time="2025-01-13T21:18:51.385996158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:51.386798 containerd[1463]: time="2025-01-13T21:18:51.386752907Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 676.579779ms" Jan 13 21:18:51.386798 containerd[1463]: time="2025-01-13T21:18:51.386794095Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 21:18:51.411703 containerd[1463]: time="2025-01-13T21:18:51.411647093Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 21:18:52.013501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1761777333.mount: Deactivated successfully. Jan 13 21:18:53.416179 containerd[1463]: time="2025-01-13T21:18:53.416114334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:53.417068 containerd[1463]: time="2025-01-13T21:18:53.416962725Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 13 21:18:53.418416 containerd[1463]: time="2025-01-13T21:18:53.418370746Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:53.421806 containerd[1463]: time="2025-01-13T21:18:53.421728453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:53.422678 containerd[1463]: time="2025-01-13T21:18:53.422644230Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.010949809s" Jan 13 21:18:53.422721 containerd[1463]: time="2025-01-13T21:18:53.422677984Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 13 21:18:54.987266 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
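The pull records above include both a reported image size and a wall-clock duration, so a rough effective pull rate per image can be read off directly. An arithmetic sketch using only the figures from the log, with sizes as containerd reports them:

```python
# (reported size in bytes, reported pull time in seconds), copied from the log.
pulls = {
    "kube-apiserver:v1.29.12":          (35_136_054, 2.474662005),
    "kube-controller-manager:v1.29.12": (33_662_844, 2.08987395),
    "kube-scheduler:v1.29.12":          (18_777_952, 1.227787911),
    "kube-proxy:v1.29.12":              (28_618_977, 3.544693345),
    "coredns:v1.11.1":                  (18_182_961, 2.123122903),
    "pause:3.9":                        (321_520,    0.676579779),
    "etcd:3.5.10-0":                    (56_649_232, 2.010949809),
}

for image, (size, secs) in pulls.items():
    print(f"{image:35s} {size / 2**20:7.1f} MiB in {secs:6.2f} s "
          f"-> {size / secs / 2**20:5.1f} MiB/s")
```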
Jan 13 21:18:55.002316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:18:55.163960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:18:55.171719 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:18:55.219091 kubelet[2121]: E0113 21:18:55.219001 2121 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:18:55.225612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:18:55.225932 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:18:55.512261 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:18:55.521877 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:18:55.544698 systemd[1]: Reloading requested from client PID 2137 ('systemctl') (unit session-7.scope)... Jan 13 21:18:55.544744 systemd[1]: Reloading... Jan 13 21:18:55.660355 zram_generator::config[2176]: No configuration found. Jan 13 21:18:56.136743 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:18:56.216313 systemd[1]: Reloading finished in 670 ms. Jan 13 21:18:56.263238 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:18:56.263434 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:18:56.263805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:18:56.265962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:18:56.436813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:18:56.442356 (kubelet)[2225]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:18:56.487179 kubelet[2225]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:18:56.487179 kubelet[2225]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:18:56.487179 kubelet[2225]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 21:18:56.487766 kubelet[2225]: I0113 21:18:56.487216 2225 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:18:56.739906 kubelet[2225]: I0113 21:18:56.739705 2225 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:18:56.739906 kubelet[2225]: I0113 21:18:56.739744 2225 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:18:56.740124 kubelet[2225]: I0113 21:18:56.739959 2225 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:18:56.756256 kubelet[2225]: E0113 21:18:56.756175 2225 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.36:6443: connect: connection refused Jan 13 21:18:56.756941 kubelet[2225]: I0113 21:18:56.756897 2225 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:18:56.769101 kubelet[2225]: I0113 21:18:56.769058 2225 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:18:56.770353 kubelet[2225]: I0113 21:18:56.770301 2225 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:18:56.770543 kubelet[2225]: I0113 21:18:56.770501 2225 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:18:56.770543 kubelet[2225]: I0113 21:18:56.770533 2225 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:18:56.770543 kubelet[2225]: I0113 21:18:56.770542 2225 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:18:56.770774 kubelet[2225]: I0113 21:18:56.770673 2225 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:18:56.770814 kubelet[2225]: I0113 21:18:56.770785 2225 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:18:56.770814 kubelet[2225]: 
I0113 21:18:56.770800 2225 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:18:56.770883 kubelet[2225]: I0113 21:18:56.770833 2225 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:18:56.770883 kubelet[2225]: I0113 21:18:56.770852 2225 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:18:56.772230 kubelet[2225]: I0113 21:18:56.772197 2225 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:18:56.773011 kubelet[2225]: W0113 21:18:56.772901 2225 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jan 13 21:18:56.773011 kubelet[2225]: E0113 21:18:56.772970 2225 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jan 13 21:18:56.773199 kubelet[2225]: W0113 21:18:56.773131 2225 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jan 13 21:18:56.773199 kubelet[2225]: E0113 21:18:56.773194 2225 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jan 13 21:18:56.774881 kubelet[2225]: I0113 21:18:56.774839 2225 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:18:56.775800 kubelet[2225]: W0113 21:18:56.775764 2225 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
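The "Adding static pod path" and "Adding apiserver pod source" entries above explain how the control plane can come up at all while every request to https://10.0.0.36:6443 is still refused: the kubelet runs any pod manifest it finds under /etc/kubernetes/manifests without needing the API server. A manifest of that kind has roughly the shape sketched below; the image, command and file paths are typical kubeadm values and are assumptions, only the manifest directory and the kubelet version v1.29.2 come from this log.

# /etc/kubernetes/manifests/kube-scheduler.yaml (shape only, fields trimmed)
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  priorityClassName: system-node-critical
  containers:
  - name: kube-scheduler
    image: registry.k8s.io/kube-scheduler:v1.29.2    # assumed to match the kubelet version; not shown in the log
    command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf    # assumed kubeadm default
    volumeMounts:
    - name: kubeconfig
      mountPath: /etc/kubernetes/scheduler.conf
      readOnly: true
  volumes:
  - name: kubeconfig                                 # the same volume name appears in the reconciler entries below
    hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate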
Jan 13 21:18:56.776485 kubelet[2225]: I0113 21:18:56.776454 2225 server.go:1256] "Started kubelet" Jan 13 21:18:56.780810 kubelet[2225]: I0113 21:18:56.780138 2225 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:18:56.780810 kubelet[2225]: I0113 21:18:56.780717 2225 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:18:56.781913 kubelet[2225]: I0113 21:18:56.781445 2225 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:18:56.781913 kubelet[2225]: I0113 21:18:56.781657 2225 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:18:56.784497 kubelet[2225]: E0113 21:18:56.783180 2225 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5d403c57e781 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:18:56.776431489 +0000 UTC m=+0.329508497,LastTimestamp:2025-01-13 21:18:56.776431489 +0000 UTC m=+0.329508497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:18:56.784497 kubelet[2225]: I0113 21:18:56.783275 2225 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:18:56.784497 kubelet[2225]: I0113 21:18:56.783461 2225 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:18:56.784497 kubelet[2225]: I0113 21:18:56.783564 2225 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:18:56.784497 kubelet[2225]: W0113 21:18:56.783955 2225 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jan 13 21:18:56.784497 kubelet[2225]: E0113 21:18:56.784003 2225 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jan 13 21:18:56.784497 kubelet[2225]: E0113 21:18:56.784316 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="200ms" Jan 13 21:18:56.786354 kubelet[2225]: I0113 21:18:56.786295 2225 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:18:56.788495 kubelet[2225]: E0113 21:18:56.788289 2225 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:18:56.789123 kubelet[2225]: I0113 21:18:56.789064 2225 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:18:56.789123 kubelet[2225]: I0113 21:18:56.789091 2225 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:18:56.789543 kubelet[2225]: I0113 21:18:56.789206 2225 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:18:56.805448 kubelet[2225]: I0113 21:18:56.805409 2225 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:18:56.805448 kubelet[2225]: I0113 21:18:56.805437 2225 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:18:56.805448 kubelet[2225]: I0113 21:18:56.805459 2225 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:18:56.806108 kubelet[2225]: I0113 21:18:56.806077 2225 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:18:56.807998 kubelet[2225]: I0113 21:18:56.807908 2225 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:18:56.807998 kubelet[2225]: I0113 21:18:56.807973 2225 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:18:56.808197 kubelet[2225]: I0113 21:18:56.808135 2225 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:18:56.808260 kubelet[2225]: E0113 21:18:56.808209 2225 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:18:56.809025 kubelet[2225]: W0113 21:18:56.808922 2225 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jan 13 21:18:56.809025 kubelet[2225]: E0113 21:18:56.808983 2225 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jan 13 21:18:56.885739 kubelet[2225]: I0113 21:18:56.885690 2225 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:18:56.886171 kubelet[2225]: E0113 21:18:56.886147 2225 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jan 13 21:18:56.908544 kubelet[2225]: E0113 21:18:56.908455 2225 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:18:56.971362 kubelet[2225]: E0113 21:18:56.971192 2225 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5d403c57e781 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:18:56.776431489 +0000 UTC m=+0.329508497,LastTimestamp:2025-01-13 21:18:56.776431489 +0000 UTC m=+0.329508497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:18:56.985846 kubelet[2225]: E0113 21:18:56.985770 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="400ms" Jan 13 21:18:57.015810 kubelet[2225]: I0113 21:18:57.015561 2225 policy_none.go:49] "None policy: Start" Jan 13 21:18:57.017117 kubelet[2225]: I0113 21:18:57.016990 2225 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:18:57.017117 kubelet[2225]: I0113 21:18:57.017034 2225 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:18:57.026176 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:18:57.047483 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:18:57.051394 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 21:18:57.062921 kubelet[2225]: I0113 21:18:57.062834 2225 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:18:57.063534 kubelet[2225]: I0113 21:18:57.063288 2225 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:18:57.065057 kubelet[2225]: E0113 21:18:57.065029 2225 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 21:18:57.087622 kubelet[2225]: I0113 21:18:57.087573 2225 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:18:57.087913 kubelet[2225]: E0113 21:18:57.087876 2225 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jan 13 21:18:57.109221 kubelet[2225]: I0113 21:18:57.109154 2225 topology_manager.go:215] "Topology Admit Handler" podUID="82150330566e9deab89c4a067bdbe6c3" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 21:18:57.110779 kubelet[2225]: I0113 21:18:57.110753 2225 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 21:18:57.111890 kubelet[2225]: I0113 21:18:57.111847 2225 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 21:18:57.119412 systemd[1]: Created slice kubepods-burstable-pod82150330566e9deab89c4a067bdbe6c3.slice - libcontainer container kubepods-burstable-pod82150330566e9deab89c4a067bdbe6c3.slice. Jan 13 21:18:57.140168 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Jan 13 21:18:57.157396 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. 
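The slice names created above follow directly from the systemd cgroup driver and the pods' QoS class: each Burstable static pod gets a kubepods-burstable-pod<UID>.slice keyed by the pod UID from its "Topology Admit Handler" entry, and the cri-containerd scopes started further down in the log are placed underneath it. For the kube-apiserver pod admitted above, the resulting hierarchy is, as a sketch:

kubepods.slice
  kubepods-burstable.slice
    kubepods-burstable-pod82150330566e9deab89c4a067bdbe6c3.slice
      cri-containerd-a3738e242fe195437f3755339f559bc9231b2d8a5717b9f9eda00af85b71bb35.scope   # pause/sandbox, started at 21:18:58
      cri-containerd-361a9c20664e913a18743ab5e5399fc22cc3794caca380bad975eb802052bf9a.scope   # kube-apiserver container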
Jan 13 21:18:57.187004 kubelet[2225]: I0113 21:18:57.186967 2225 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82150330566e9deab89c4a067bdbe6c3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"82150330566e9deab89c4a067bdbe6c3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:18:57.187004 kubelet[2225]: I0113 21:18:57.187007 2225 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:18:57.187147 kubelet[2225]: I0113 21:18:57.187026 2225 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:18:57.187147 kubelet[2225]: I0113 21:18:57.187043 2225 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:18:57.187147 kubelet[2225]: I0113 21:18:57.187064 2225 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82150330566e9deab89c4a067bdbe6c3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"82150330566e9deab89c4a067bdbe6c3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:18:57.187147 kubelet[2225]: I0113 21:18:57.187086 2225 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:18:57.187266 kubelet[2225]: I0113 21:18:57.187164 2225 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:18:57.187295 kubelet[2225]: I0113 21:18:57.187264 2225 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:18:57.187346 kubelet[2225]: I0113 21:18:57.187320 2225 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82150330566e9deab89c4a067bdbe6c3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"82150330566e9deab89c4a067bdbe6c3\") " 
pod="kube-system/kube-apiserver-localhost" Jan 13 21:18:57.386987 kubelet[2225]: E0113 21:18:57.386791 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="800ms" Jan 13 21:18:57.438757 kubelet[2225]: E0113 21:18:57.438686 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:57.439646 containerd[1463]: time="2025-01-13T21:18:57.439567004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:82150330566e9deab89c4a067bdbe6c3,Namespace:kube-system,Attempt:0,}" Jan 13 21:18:57.456091 kubelet[2225]: E0113 21:18:57.456021 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:57.456857 containerd[1463]: time="2025-01-13T21:18:57.456801996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Jan 13 21:18:57.460175 kubelet[2225]: E0113 21:18:57.460137 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:57.460801 containerd[1463]: time="2025-01-13T21:18:57.460740983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Jan 13 21:18:57.489756 kubelet[2225]: I0113 21:18:57.489711 2225 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:18:57.490407 kubelet[2225]: E0113 21:18:57.490069 2225 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jan 13 21:18:57.701630 kubelet[2225]: W0113 21:18:57.701316 2225 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jan 13 21:18:57.701630 kubelet[2225]: E0113 21:18:57.701439 2225 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jan 13 21:18:57.726124 kubelet[2225]: W0113 21:18:57.726032 2225 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jan 13 21:18:57.726124 kubelet[2225]: E0113 21:18:57.726105 2225 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jan 13 21:18:58.081124 kubelet[2225]: W0113 
21:18:58.080923 2225 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jan 13 21:18:58.081124 kubelet[2225]: E0113 21:18:58.080991 2225 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jan 13 21:18:58.131225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4114802870.mount: Deactivated successfully. Jan 13 21:18:58.143568 containerd[1463]: time="2025-01-13T21:18:58.142640989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:18:58.152106 containerd[1463]: time="2025-01-13T21:18:58.151893521Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 21:18:58.153884 containerd[1463]: time="2025-01-13T21:18:58.153794796Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:18:58.159753 containerd[1463]: time="2025-01-13T21:18:58.156597422Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:18:58.159753 containerd[1463]: time="2025-01-13T21:18:58.158145766Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:18:58.159753 containerd[1463]: time="2025-01-13T21:18:58.159605253Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:18:58.160023 containerd[1463]: time="2025-01-13T21:18:58.159937737Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:18:58.164231 containerd[1463]: time="2025-01-13T21:18:58.164117345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:18:58.165600 containerd[1463]: time="2025-01-13T21:18:58.165309841Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 725.630666ms" Jan 13 21:18:58.169348 containerd[1463]: time="2025-01-13T21:18:58.169259027Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 708.423517ms" Jan 13 21:18:58.170543 containerd[1463]: 
time="2025-01-13T21:18:58.170471080Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 713.563046ms" Jan 13 21:18:58.188067 kubelet[2225]: E0113 21:18:58.188020 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="1.6s" Jan 13 21:18:58.293510 kubelet[2225]: I0113 21:18:58.293459 2225 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:18:58.293989 kubelet[2225]: E0113 21:18:58.293942 2225 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jan 13 21:18:58.311786 kubelet[2225]: W0113 21:18:58.311677 2225 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jan 13 21:18:58.311786 kubelet[2225]: E0113 21:18:58.311747 2225 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jan 13 21:18:58.374292 containerd[1463]: time="2025-01-13T21:18:58.373811376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:18:58.374292 containerd[1463]: time="2025-01-13T21:18:58.373946109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:18:58.374292 containerd[1463]: time="2025-01-13T21:18:58.373987396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:18:58.376693 containerd[1463]: time="2025-01-13T21:18:58.376600396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:18:58.378600 containerd[1463]: time="2025-01-13T21:18:58.378371217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:18:58.378600 containerd[1463]: time="2025-01-13T21:18:58.378457770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:18:58.378600 containerd[1463]: time="2025-01-13T21:18:58.378504247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:18:58.378887 containerd[1463]: time="2025-01-13T21:18:58.378633940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:18:58.380883 containerd[1463]: time="2025-01-13T21:18:58.380551196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:18:58.380883 containerd[1463]: time="2025-01-13T21:18:58.380620726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:18:58.380883 containerd[1463]: time="2025-01-13T21:18:58.380636676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:18:58.380883 containerd[1463]: time="2025-01-13T21:18:58.380733758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:18:58.410769 systemd[1]: Started cri-containerd-095752d8cd9fe4c47a20f79fd835a0f2a1c8049c9dba37659f51a9c370dcd2b8.scope - libcontainer container 095752d8cd9fe4c47a20f79fd835a0f2a1c8049c9dba37659f51a9c370dcd2b8. Jan 13 21:18:58.413101 systemd[1]: Started cri-containerd-a3738e242fe195437f3755339f559bc9231b2d8a5717b9f9eda00af85b71bb35.scope - libcontainer container a3738e242fe195437f3755339f559bc9231b2d8a5717b9f9eda00af85b71bb35. Jan 13 21:18:58.415891 systemd[1]: Started cri-containerd-d88deb2a0c6586f298c6ede461f2700ca637172282941f82a360815b07854efb.scope - libcontainer container d88deb2a0c6586f298c6ede461f2700ca637172282941f82a360815b07854efb. Jan 13 21:18:58.462859 containerd[1463]: time="2025-01-13T21:18:58.462782619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"095752d8cd9fe4c47a20f79fd835a0f2a1c8049c9dba37659f51a9c370dcd2b8\"" Jan 13 21:18:58.465517 kubelet[2225]: E0113 21:18:58.465478 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:58.469186 containerd[1463]: time="2025-01-13T21:18:58.469139079Z" level=info msg="CreateContainer within sandbox \"095752d8cd9fe4c47a20f79fd835a0f2a1c8049c9dba37659f51a9c370dcd2b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:18:58.474530 containerd[1463]: time="2025-01-13T21:18:58.474383164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:82150330566e9deab89c4a067bdbe6c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3738e242fe195437f3755339f559bc9231b2d8a5717b9f9eda00af85b71bb35\"" Jan 13 21:18:58.475467 kubelet[2225]: E0113 21:18:58.475436 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:58.479053 containerd[1463]: time="2025-01-13T21:18:58.478996696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"d88deb2a0c6586f298c6ede461f2700ca637172282941f82a360815b07854efb\"" Jan 13 21:18:58.479542 kubelet[2225]: E0113 21:18:58.479522 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:58.479949 containerd[1463]: 
time="2025-01-13T21:18:58.479912163Z" level=info msg="CreateContainer within sandbox \"a3738e242fe195437f3755339f559bc9231b2d8a5717b9f9eda00af85b71bb35\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:18:58.485262 containerd[1463]: time="2025-01-13T21:18:58.485188328Z" level=info msg="CreateContainer within sandbox \"d88deb2a0c6586f298c6ede461f2700ca637172282941f82a360815b07854efb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:18:58.502354 containerd[1463]: time="2025-01-13T21:18:58.502255905Z" level=info msg="CreateContainer within sandbox \"095752d8cd9fe4c47a20f79fd835a0f2a1c8049c9dba37659f51a9c370dcd2b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0cc18fea279dc6cba21a9fdd9e9ff56b3efd9c76b95e18ee201f471cefe63759\"" Jan 13 21:18:58.503111 containerd[1463]: time="2025-01-13T21:18:58.503085531Z" level=info msg="StartContainer for \"0cc18fea279dc6cba21a9fdd9e9ff56b3efd9c76b95e18ee201f471cefe63759\"" Jan 13 21:18:58.509307 containerd[1463]: time="2025-01-13T21:18:58.509231898Z" level=info msg="CreateContainer within sandbox \"a3738e242fe195437f3755339f559bc9231b2d8a5717b9f9eda00af85b71bb35\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"361a9c20664e913a18743ab5e5399fc22cc3794caca380bad975eb802052bf9a\"" Jan 13 21:18:58.509879 containerd[1463]: time="2025-01-13T21:18:58.509849757Z" level=info msg="StartContainer for \"361a9c20664e913a18743ab5e5399fc22cc3794caca380bad975eb802052bf9a\"" Jan 13 21:18:58.514845 containerd[1463]: time="2025-01-13T21:18:58.514578655Z" level=info msg="CreateContainer within sandbox \"d88deb2a0c6586f298c6ede461f2700ca637172282941f82a360815b07854efb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d330db6874617bd9b3a360762b6073a01258091e67064a6f61236436d9467ee2\"" Jan 13 21:18:58.515564 containerd[1463]: time="2025-01-13T21:18:58.515524238Z" level=info msg="StartContainer for \"d330db6874617bd9b3a360762b6073a01258091e67064a6f61236436d9467ee2\"" Jan 13 21:18:58.544670 systemd[1]: Started cri-containerd-0cc18fea279dc6cba21a9fdd9e9ff56b3efd9c76b95e18ee201f471cefe63759.scope - libcontainer container 0cc18fea279dc6cba21a9fdd9e9ff56b3efd9c76b95e18ee201f471cefe63759. Jan 13 21:18:58.551629 systemd[1]: Started cri-containerd-361a9c20664e913a18743ab5e5399fc22cc3794caca380bad975eb802052bf9a.scope - libcontainer container 361a9c20664e913a18743ab5e5399fc22cc3794caca380bad975eb802052bf9a. Jan 13 21:18:58.553781 systemd[1]: Started cri-containerd-d330db6874617bd9b3a360762b6073a01258091e67064a6f61236436d9467ee2.scope - libcontainer container d330db6874617bd9b3a360762b6073a01258091e67064a6f61236436d9467ee2. 
Jan 13 21:18:58.623412 containerd[1463]: time="2025-01-13T21:18:58.622816009Z" level=info msg="StartContainer for \"0cc18fea279dc6cba21a9fdd9e9ff56b3efd9c76b95e18ee201f471cefe63759\" returns successfully" Jan 13 21:18:58.634262 containerd[1463]: time="2025-01-13T21:18:58.634100010Z" level=info msg="StartContainer for \"361a9c20664e913a18743ab5e5399fc22cc3794caca380bad975eb802052bf9a\" returns successfully" Jan 13 21:18:58.634262 containerd[1463]: time="2025-01-13T21:18:58.634225856Z" level=info msg="StartContainer for \"d330db6874617bd9b3a360762b6073a01258091e67064a6f61236436d9467ee2\" returns successfully" Jan 13 21:18:58.820440 kubelet[2225]: E0113 21:18:58.820273 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:58.823381 kubelet[2225]: E0113 21:18:58.821358 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:58.824805 kubelet[2225]: E0113 21:18:58.824777 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:59.826089 kubelet[2225]: E0113 21:18:59.826033 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:59.898456 kubelet[2225]: I0113 21:18:59.897958 2225 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:19:00.031009 kubelet[2225]: I0113 21:19:00.030958 2225 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:19:00.042578 kubelet[2225]: E0113 21:19:00.042524 2225 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:00.086277 kubelet[2225]: E0113 21:19:00.086118 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 13 21:19:00.143504 kubelet[2225]: E0113 21:19:00.143429 2225 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:00.244580 kubelet[2225]: E0113 21:19:00.244504 2225 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:00.345378 kubelet[2225]: E0113 21:19:00.345206 2225 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:00.445849 kubelet[2225]: E0113 21:19:00.445796 2225 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:00.546401 kubelet[2225]: E0113 21:19:00.546352 2225 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:00.647115 kubelet[2225]: E0113 21:19:00.646953 2225 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:00.747657 kubelet[2225]: E0113 21:19:00.747581 2225 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:19:01.774312 kubelet[2225]: I0113 21:19:01.774270 2225 apiserver.go:52] "Watching apiserver" Jan 13 21:19:01.784700 
kubelet[2225]: I0113 21:19:01.784631 2225 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:19:03.402284 systemd[1]: Reloading requested from client PID 2507 ('systemctl') (unit session-7.scope)... Jan 13 21:19:03.402305 systemd[1]: Reloading... Jan 13 21:19:03.510369 zram_generator::config[2549]: No configuration found. Jan 13 21:19:03.627358 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:19:03.729216 systemd[1]: Reloading finished in 326 ms. Jan 13 21:19:03.795093 kubelet[2225]: I0113 21:19:03.795010 2225 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:19:03.795433 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:19:03.818596 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:19:03.819051 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:19:03.829841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:19:04.008630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:19:04.018817 (kubelet)[2591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:19:04.077118 kubelet[2591]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:19:04.077118 kubelet[2591]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:19:04.077118 kubelet[2591]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:19:04.077538 kubelet[2591]: I0113 21:19:04.077149 2591 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:19:04.082569 kubelet[2591]: I0113 21:19:04.082532 2591 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:19:04.082569 kubelet[2591]: I0113 21:19:04.082564 2591 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:19:04.082789 kubelet[2591]: I0113 21:19:04.082770 2591 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:19:04.084660 kubelet[2591]: I0113 21:19:04.084627 2591 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:19:04.086920 kubelet[2591]: I0113 21:19:04.086818 2591 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:19:04.098487 kubelet[2591]: I0113 21:19:04.098438 2591 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:19:04.098908 kubelet[2591]: I0113 21:19:04.098883 2591 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:19:04.099171 kubelet[2591]: I0113 21:19:04.099114 2591 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:19:04.099267 kubelet[2591]: I0113 21:19:04.099189 2591 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:19:04.099267 kubelet[2591]: I0113 21:19:04.099204 2591 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:19:04.099267 kubelet[2591]: I0113 21:19:04.099256 2591 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:19:04.099415 kubelet[2591]: I0113 21:19:04.099401 2591 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:19:04.099471 kubelet[2591]: I0113 21:19:04.099422 2591 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:19:04.099471 kubelet[2591]: I0113 21:19:04.099455 2591 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:19:04.099526 kubelet[2591]: I0113 21:19:04.099478 2591 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:19:04.100827 sudo[2606]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 21:19:04.101861 kubelet[2591]: I0113 21:19:04.100970 2591 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:19:04.101861 kubelet[2591]: I0113 21:19:04.101271 2591 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:19:04.101207 sudo[2606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 21:19:04.102059 kubelet[2591]: I0113 21:19:04.102050 2591 server.go:1256] "Started kubelet" Jan 13 21:19:04.103836 kubelet[2591]: I0113 21:19:04.102821 2591 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:19:04.103836 kubelet[2591]: I0113 21:19:04.103188 2591 server.go:233] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:19:04.104098 kubelet[2591]: I0113 21:19:04.104077 2591 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:19:04.108814 kubelet[2591]: I0113 21:19:04.108779 2591 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:19:04.115782 kubelet[2591]: I0113 21:19:04.115731 2591 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:19:04.122967 kubelet[2591]: I0113 21:19:04.122931 2591 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:19:04.123127 kubelet[2591]: I0113 21:19:04.123024 2591 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:19:04.123227 kubelet[2591]: I0113 21:19:04.123204 2591 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:19:04.123906 kubelet[2591]: E0113 21:19:04.123880 2591 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:19:04.124201 kubelet[2591]: I0113 21:19:04.124160 2591 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:19:04.124448 kubelet[2591]: I0113 21:19:04.124416 2591 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:19:04.130115 kubelet[2591]: I0113 21:19:04.130081 2591 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:19:04.136288 kubelet[2591]: I0113 21:19:04.136250 2591 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:19:04.138452 kubelet[2591]: I0113 21:19:04.138413 2591 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:19:04.138452 kubelet[2591]: I0113 21:19:04.138450 2591 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:19:04.138607 kubelet[2591]: I0113 21:19:04.138472 2591 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:19:04.138607 kubelet[2591]: E0113 21:19:04.138559 2591 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:19:04.184684 kubelet[2591]: I0113 21:19:04.184627 2591 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:19:04.184684 kubelet[2591]: I0113 21:19:04.184679 2591 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:19:04.184684 kubelet[2591]: I0113 21:19:04.184698 2591 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:19:04.184887 kubelet[2591]: I0113 21:19:04.184836 2591 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:19:04.184887 kubelet[2591]: I0113 21:19:04.184855 2591 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:19:04.184887 kubelet[2591]: I0113 21:19:04.184880 2591 policy_none.go:49] "None policy: Start" Jan 13 21:19:04.186089 kubelet[2591]: I0113 21:19:04.185783 2591 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:19:04.186089 kubelet[2591]: I0113 21:19:04.185815 2591 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:19:04.186395 kubelet[2591]: I0113 21:19:04.186365 2591 state_mem.go:75] "Updated machine memory state" Jan 13 21:19:04.194164 kubelet[2591]: I0113 21:19:04.194141 2591 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:19:04.194687 kubelet[2591]: I0113 21:19:04.194670 2591 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:19:04.228132 kubelet[2591]: I0113 21:19:04.228091 2591 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:19:04.239732 kubelet[2591]: I0113 21:19:04.239690 2591 topology_manager.go:215] "Topology Admit Handler" podUID="82150330566e9deab89c4a067bdbe6c3" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 21:19:04.239848 kubelet[2591]: I0113 21:19:04.239796 2591 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 21:19:04.239848 kubelet[2591]: I0113 21:19:04.239836 2591 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 21:19:04.323742 kubelet[2591]: I0113 21:19:04.323599 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:19:04.323742 kubelet[2591]: I0113 21:19:04.323646 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:19:04.323742 kubelet[2591]: I0113 21:19:04.323666 2591 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:19:04.323742 kubelet[2591]: I0113 21:19:04.323688 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82150330566e9deab89c4a067bdbe6c3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"82150330566e9deab89c4a067bdbe6c3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:19:04.323742 kubelet[2591]: I0113 21:19:04.323706 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82150330566e9deab89c4a067bdbe6c3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"82150330566e9deab89c4a067bdbe6c3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:19:04.323991 kubelet[2591]: I0113 21:19:04.323730 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82150330566e9deab89c4a067bdbe6c3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"82150330566e9deab89c4a067bdbe6c3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:19:04.323991 kubelet[2591]: I0113 21:19:04.323750 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:19:04.323991 kubelet[2591]: I0113 21:19:04.323774 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:19:04.323991 kubelet[2591]: I0113 21:19:04.323799 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:19:04.393728 kubelet[2591]: I0113 21:19:04.393546 2591 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 21:19:04.393728 kubelet[2591]: I0113 21:19:04.393632 2591 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:19:04.624284 sudo[2606]: pam_unix(sudo:session): session closed for user root Jan 13 21:19:04.673428 kubelet[2591]: E0113 21:19:04.673158 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:04.673428 kubelet[2591]: E0113 21:19:04.673257 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:04.673428 kubelet[2591]: E0113 21:19:04.673354 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:05.101003 kubelet[2591]: I0113 21:19:05.100865 2591 apiserver.go:52] "Watching apiserver" Jan 13 21:19:05.123420 kubelet[2591]: I0113 21:19:05.123373 2591 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:19:05.153229 kubelet[2591]: E0113 21:19:05.153194 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:05.153406 kubelet[2591]: E0113 21:19:05.153287 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:05.379874 kubelet[2591]: E0113 21:19:05.379708 2591 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 21:19:05.380630 kubelet[2591]: E0113 21:19:05.380374 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:05.455912 kubelet[2591]: I0113 21:19:05.455869 2591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.455795785 podStartE2EDuration="1.455795785s" podCreationTimestamp="2025-01-13 21:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:19:05.379578821 +0000 UTC m=+1.355395458" watchObservedRunningTime="2025-01-13 21:19:05.455795785 +0000 UTC m=+1.431612422" Jan 13 21:19:05.471345 kubelet[2591]: I0113 21:19:05.470790 2591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.470735281 podStartE2EDuration="1.470735281s" podCreationTimestamp="2025-01-13 21:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:19:05.455704258 +0000 UTC m=+1.431520895" watchObservedRunningTime="2025-01-13 21:19:05.470735281 +0000 UTC m=+1.446551918" Jan 13 21:19:05.481717 kubelet[2591]: I0113 21:19:05.481655 2591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.481588817 podStartE2EDuration="1.481588817s" podCreationTimestamp="2025-01-13 21:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:19:05.471188792 +0000 UTC m=+1.447005429" watchObservedRunningTime="2025-01-13 21:19:05.481588817 +0000 UTC m=+1.457405454" Jan 13 21:19:06.034938 sudo[1646]: pam_unix(sudo:session): session closed for user root Jan 13 21:19:06.037309 sshd[1643]: pam_unix(sshd:session): session closed for user core Jan 13 21:19:06.042671 systemd[1]: sshd@6-10.0.0.36:22-10.0.0.1:46886.service: Deactivated successfully. Jan 13 21:19:06.044955 systemd[1]: session-7.scope: Deactivated successfully. 
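The "Nameserver limits exceeded" errors that recur above come from the kubelet's pod-DNS handling: it copies the host's resolv.conf into each pod but, like the glibc resolver, honours at most three nameserver entries, so anything past the third is dropped and this warning records the three that were kept. Only the survivors (1.1.1.1, 1.0.0.1, 8.8.8.8) are visible in the log; a host resolv.conf that triggers the message would contain at least one more entry, for example (the fourth server below is purely hypothetical):

nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9    # hypothetical fourth entry; the kubelet omits everything past the third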
Jan 13 21:19:06.045232 systemd[1]: session-7.scope: Consumed 4.383s CPU time, 193.9M memory peak, 0B memory swap peak. Jan 13 21:19:06.045782 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:19:06.046880 systemd-logind[1452]: Removed session 7. Jan 13 21:19:06.154015 kubelet[2591]: E0113 21:19:06.153981 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:11.613290 kubelet[2591]: E0113 21:19:11.613251 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:11.899521 kubelet[2591]: E0113 21:19:11.899313 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:12.163896 kubelet[2591]: E0113 21:19:12.163586 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:12.163896 kubelet[2591]: E0113 21:19:12.163766 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:13.337285 kubelet[2591]: E0113 21:19:13.337223 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:14.167438 kubelet[2591]: E0113 21:19:14.167392 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:16.889490 kubelet[2591]: I0113 21:19:16.889439 2591 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:19:16.890026 containerd[1463]: time="2025-01-13T21:19:16.889880914Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:19:16.890287 kubelet[2591]: I0113 21:19:16.890150 2591 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:19:17.245840 update_engine[1455]: I20250113 21:19:17.245640 1455 update_attempter.cc:509] Updating boot flags... 
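The pod-CIDR update at 21:19:16 above marks the node finally receiving its range from the control plane: the controller-manager's node IPAM controller has assigned 192.168.0.0/24 to this node's spec.podCIDR, and the kubelet forwards it to containerd through the CRI runtime-config update logged above, leaving actual CNI setup to Cilium, whose pods are admitted just below. As a sketch, the relevant part of the Node object would read (only the node name and the 192.168.0.0/24 range are confirmed by this log):

apiVersion: v1
kind: Node
metadata:
  name: localhost
spec:
  podCIDR: 192.168.0.0/24
  podCIDRs:
  - 192.168.0.0/24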
Jan 13 21:19:17.275884 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2678) Jan 13 21:19:17.338368 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2678) Jan 13 21:19:17.373704 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2678) Jan 13 21:19:17.903701 kubelet[2591]: I0113 21:19:17.903536 2591 topology_manager.go:215] "Topology Admit Handler" podUID="9d35d48d-5f9e-4461-8be6-e85d46ac29e4" podNamespace="kube-system" podName="kube-proxy-8pvnl" Jan 13 21:19:17.909364 kubelet[2591]: I0113 21:19:17.909272 2591 topology_manager.go:215] "Topology Admit Handler" podUID="e74a0430-f204-483b-978a-5818dee1b4ed" podNamespace="kube-system" podName="cilium-zvlsb" Jan 13 21:19:17.913748 systemd[1]: Created slice kubepods-besteffort-pod9d35d48d_5f9e_4461_8be6_e85d46ac29e4.slice - libcontainer container kubepods-besteffort-pod9d35d48d_5f9e_4461_8be6_e85d46ac29e4.slice. Jan 13 21:19:17.925899 systemd[1]: Created slice kubepods-burstable-pode74a0430_f204_483b_978a_5818dee1b4ed.slice - libcontainer container kubepods-burstable-pode74a0430_f204_483b_978a_5818dee1b4ed.slice. Jan 13 21:19:17.998248 kubelet[2591]: I0113 21:19:17.998144 2591 topology_manager.go:215] "Topology Admit Handler" podUID="9242f17f-039a-43d8-ba7b-cff54d82d540" podNamespace="kube-system" podName="cilium-operator-5cc964979-fcwdf" Jan 13 21:19:18.010036 systemd[1]: Created slice kubepods-besteffort-pod9242f17f_039a_43d8_ba7b_cff54d82d540.slice - libcontainer container kubepods-besteffort-pod9242f17f_039a_43d8_ba7b_cff54d82d540.slice. Jan 13 21:19:18.102298 kubelet[2591]: I0113 21:19:18.102243 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-etc-cni-netd\") pod \"cilium-zvlsb\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " pod="kube-system/cilium-zvlsb" Jan 13 21:19:18.102298 kubelet[2591]: I0113 21:19:18.102290 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs4kp\" (UniqueName: \"kubernetes.io/projected/9242f17f-039a-43d8-ba7b-cff54d82d540-kube-api-access-fs4kp\") pod \"cilium-operator-5cc964979-fcwdf\" (UID: \"9242f17f-039a-43d8-ba7b-cff54d82d540\") " pod="kube-system/cilium-operator-5cc964979-fcwdf" Jan 13 21:19:18.102298 kubelet[2591]: I0113 21:19:18.102312 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjjb8\" (UniqueName: \"kubernetes.io/projected/9d35d48d-5f9e-4461-8be6-e85d46ac29e4-kube-api-access-zjjb8\") pod \"kube-proxy-8pvnl\" (UID: \"9d35d48d-5f9e-4461-8be6-e85d46ac29e4\") " pod="kube-system/kube-proxy-8pvnl" Jan 13 21:19:18.102518 kubelet[2591]: I0113 21:19:18.102351 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-hostproc\") pod \"cilium-zvlsb\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " pod="kube-system/cilium-zvlsb" Jan 13 21:19:18.102518 kubelet[2591]: I0113 21:19:18.102371 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9242f17f-039a-43d8-ba7b-cff54d82d540-cilium-config-path\") pod \"cilium-operator-5cc964979-fcwdf\" (UID: 
\"9242f17f-039a-43d8-ba7b-cff54d82d540\") " pod="kube-system/cilium-operator-5cc964979-fcwdf" Jan 13 21:19:18.102518 kubelet[2591]: I0113 21:19:18.102389 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e74a0430-f204-483b-978a-5818dee1b4ed-hubble-tls\") pod \"cilium-zvlsb\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " pod="kube-system/cilium-zvlsb" Jan 13 21:19:18.102518 kubelet[2591]: I0113 21:19:18.102444 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b7bs\" (UniqueName: \"kubernetes.io/projected/e74a0430-f204-483b-978a-5818dee1b4ed-kube-api-access-7b7bs\") pod \"cilium-zvlsb\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " pod="kube-system/cilium-zvlsb" Jan 13 21:19:18.102616 kubelet[2591]: I0113 21:19:18.102517 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-cni-path\") pod \"cilium-zvlsb\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " pod="kube-system/cilium-zvlsb" Jan 13 21:19:18.102616 kubelet[2591]: I0113 21:19:18.102577 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e74a0430-f204-483b-978a-5818dee1b4ed-clustermesh-secrets\") pod \"cilium-zvlsb\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " pod="kube-system/cilium-zvlsb" Jan 13 21:19:18.102616 kubelet[2591]: I0113 21:19:18.102598 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-host-proc-sys-net\") pod \"cilium-zvlsb\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " pod="kube-system/cilium-zvlsb" Jan 13 21:19:18.102691 kubelet[2591]: I0113 21:19:18.102620 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d35d48d-5f9e-4461-8be6-e85d46ac29e4-xtables-lock\") pod \"kube-proxy-8pvnl\" (UID: \"9d35d48d-5f9e-4461-8be6-e85d46ac29e4\") " pod="kube-system/kube-proxy-8pvnl" Jan 13 21:19:18.102691 kubelet[2591]: I0113 21:19:18.102668 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e74a0430-f204-483b-978a-5818dee1b4ed-cilium-config-path\") pod \"cilium-zvlsb\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " pod="kube-system/cilium-zvlsb" Jan 13 21:19:18.102735 kubelet[2591]: I0113 21:19:18.102703 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d35d48d-5f9e-4461-8be6-e85d46ac29e4-kube-proxy\") pod \"kube-proxy-8pvnl\" (UID: \"9d35d48d-5f9e-4461-8be6-e85d46ac29e4\") " pod="kube-system/kube-proxy-8pvnl" Jan 13 21:19:18.102735 kubelet[2591]: I0113 21:19:18.102734 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-cilium-run\") pod \"cilium-zvlsb\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " pod="kube-system/cilium-zvlsb" Jan 13 21:19:18.102797 kubelet[2591]: I0113 21:19:18.102757 2591 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-lib-modules\") pod \"cilium-zvlsb\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " pod="kube-system/cilium-zvlsb" Jan 13 21:19:18.102825 kubelet[2591]: I0113 21:19:18.102800 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-xtables-lock\") pod \"cilium-zvlsb\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " pod="kube-system/cilium-zvlsb" Jan 13 21:19:18.102825 kubelet[2591]: I0113 21:19:18.102820 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-host-proc-sys-kernel\") pod \"cilium-zvlsb\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " pod="kube-system/cilium-zvlsb" Jan 13 21:19:18.102873 kubelet[2591]: I0113 21:19:18.102840 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d35d48d-5f9e-4461-8be6-e85d46ac29e4-lib-modules\") pod \"kube-proxy-8pvnl\" (UID: \"9d35d48d-5f9e-4461-8be6-e85d46ac29e4\") " pod="kube-system/kube-proxy-8pvnl" Jan 13 21:19:18.102873 kubelet[2591]: I0113 21:19:18.102861 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-bpf-maps\") pod \"cilium-zvlsb\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " pod="kube-system/cilium-zvlsb" Jan 13 21:19:18.102925 kubelet[2591]: I0113 21:19:18.102878 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-cilium-cgroup\") pod \"cilium-zvlsb\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " pod="kube-system/cilium-zvlsb" Jan 13 21:19:18.229695 kubelet[2591]: E0113 21:19:18.228510 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:18.229810 containerd[1463]: time="2025-01-13T21:19:18.229002767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zvlsb,Uid:e74a0430-f204-483b-978a-5818dee1b4ed,Namespace:kube-system,Attempt:0,}" Jan 13 21:19:18.257554 containerd[1463]: time="2025-01-13T21:19:18.257433161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:19:18.257554 containerd[1463]: time="2025-01-13T21:19:18.257501881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:19:18.257554 containerd[1463]: time="2025-01-13T21:19:18.257512862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:18.257770 containerd[1463]: time="2025-01-13T21:19:18.257667124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:18.278472 systemd[1]: Started cri-containerd-5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097.scope - libcontainer container 5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097. Jan 13 21:19:18.302993 containerd[1463]: time="2025-01-13T21:19:18.302941172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zvlsb,Uid:e74a0430-f204-483b-978a-5818dee1b4ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097\"" Jan 13 21:19:18.303790 kubelet[2591]: E0113 21:19:18.303764 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:18.305229 containerd[1463]: time="2025-01-13T21:19:18.305201717Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 21:19:18.313190 kubelet[2591]: E0113 21:19:18.313135 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:18.313723 containerd[1463]: time="2025-01-13T21:19:18.313661692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-fcwdf,Uid:9242f17f-039a-43d8-ba7b-cff54d82d540,Namespace:kube-system,Attempt:0,}" Jan 13 21:19:18.339934 containerd[1463]: time="2025-01-13T21:19:18.339790203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:19:18.339934 containerd[1463]: time="2025-01-13T21:19:18.339862469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:19:18.339934 containerd[1463]: time="2025-01-13T21:19:18.339881346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:18.341143 containerd[1463]: time="2025-01-13T21:19:18.340405529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:18.360486 systemd[1]: Started cri-containerd-4d56598b09eb4da534451e501cad44fa23d85acd6123758dcb200ca93345ad8f.scope - libcontainer container 4d56598b09eb4da534451e501cad44fa23d85acd6123758dcb200ca93345ad8f. 
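The PullImage request above uses a reference that carries both a tag and a digest (quay.io/cilium/cilium:v1.12.5@sha256:06ce2b...); containerd later records an empty repo tag and keeps only the repo digest for such pinned pulls. A rough Go sketch of splitting a reference of that shape follows; real clients use the OCI distribution reference parser, so treat this as a simplification.

```go
package main

import (
	"fmt"
	"strings"
)

// splitImageRef breaks a pinned reference of the form repo:tag@sha256:digest
// into its parts. Simplified for illustration; it skips validation entirely.
func splitImageRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		digest = ref[i+1:]
		ref = ref[:i]
	}
	// A tag is the part after the last ':' only if it comes after the last '/'
	// (otherwise the ':' belongs to a registry port).
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		tag = ref[i+1:]
		ref = ref[:i]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, digest := splitImageRef(
		"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
	fmt.Println("repo:  ", repo)   // quay.io/cilium/cilium
	fmt.Println("tag:   ", tag)    // v1.12.5
	fmt.Println("digest:", digest) // sha256:06ce2b...
}
```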
Jan 13 21:19:18.397526 containerd[1463]: time="2025-01-13T21:19:18.397473070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-fcwdf,Uid:9242f17f-039a-43d8-ba7b-cff54d82d540,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d56598b09eb4da534451e501cad44fa23d85acd6123758dcb200ca93345ad8f\"" Jan 13 21:19:18.398255 kubelet[2591]: E0113 21:19:18.398228 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:18.523578 kubelet[2591]: E0113 21:19:18.523396 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:18.524093 containerd[1463]: time="2025-01-13T21:19:18.524021547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8pvnl,Uid:9d35d48d-5f9e-4461-8be6-e85d46ac29e4,Namespace:kube-system,Attempt:0,}" Jan 13 21:19:18.548752 containerd[1463]: time="2025-01-13T21:19:18.548511464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:19:18.548752 containerd[1463]: time="2025-01-13T21:19:18.548584632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:19:18.548752 containerd[1463]: time="2025-01-13T21:19:18.548599691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:18.548752 containerd[1463]: time="2025-01-13T21:19:18.548698248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:18.586599 systemd[1]: Started cri-containerd-195354d0400e4a9ce2ae7141b32a6005a5c9e5a8f10512067adfafebca22f595.scope - libcontainer container 195354d0400e4a9ce2ae7141b32a6005a5c9e5a8f10512067adfafebca22f595. 
Jan 13 21:19:18.610892 containerd[1463]: time="2025-01-13T21:19:18.610851738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8pvnl,Uid:9d35d48d-5f9e-4461-8be6-e85d46ac29e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"195354d0400e4a9ce2ae7141b32a6005a5c9e5a8f10512067adfafebca22f595\"" Jan 13 21:19:18.611568 kubelet[2591]: E0113 21:19:18.611544 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:18.617260 containerd[1463]: time="2025-01-13T21:19:18.617205891Z" level=info msg="CreateContainer within sandbox \"195354d0400e4a9ce2ae7141b32a6005a5c9e5a8f10512067adfafebca22f595\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:19:18.639955 containerd[1463]: time="2025-01-13T21:19:18.639898421Z" level=info msg="CreateContainer within sandbox \"195354d0400e4a9ce2ae7141b32a6005a5c9e5a8f10512067adfafebca22f595\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7ec05dc9b9e5409ea8740a7d6aa60548b9b1fa68010326403e619640a034923e\"" Jan 13 21:19:18.640688 containerd[1463]: time="2025-01-13T21:19:18.640400342Z" level=info msg="StartContainer for \"7ec05dc9b9e5409ea8740a7d6aa60548b9b1fa68010326403e619640a034923e\"" Jan 13 21:19:18.666467 systemd[1]: Started cri-containerd-7ec05dc9b9e5409ea8740a7d6aa60548b9b1fa68010326403e619640a034923e.scope - libcontainer container 7ec05dc9b9e5409ea8740a7d6aa60548b9b1fa68010326403e619640a034923e. Jan 13 21:19:18.698089 containerd[1463]: time="2025-01-13T21:19:18.698031039Z" level=info msg="StartContainer for \"7ec05dc9b9e5409ea8740a7d6aa60548b9b1fa68010326403e619640a034923e\" returns successfully" Jan 13 21:19:19.177003 kubelet[2591]: E0113 21:19:19.176958 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:24.153417 kubelet[2591]: I0113 21:19:24.153114 2591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8pvnl" podStartSLOduration=7.15308056 podStartE2EDuration="7.15308056s" podCreationTimestamp="2025-01-13 21:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:19:19.236887595 +0000 UTC m=+15.212704232" watchObservedRunningTime="2025-01-13 21:19:24.15308056 +0000 UTC m=+20.128897197" Jan 13 21:19:27.946423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1624888790.mount: Deactivated successfully. 
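The transient mount unit just deactivated, var-lib-containerd-tmpmounts-containerd\x2dmount1624888790.mount, shows systemd's unit-name escaping: path separators become "-" and literal dashes become "\x2d". The Go sketch below reverses the escaping to recover the mount path; it covers only the escapes visible in this log, not the full systemd-escape rules.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath reverses the subset of systemd unit-name escaping seen in
// the log: "-" separates path components and "\xNN" encodes a literal byte.
// Edge cases (leading dots, the root path, non-mount units) are omitted.
func unescapeUnitPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case strings.HasPrefix(name[i:], `\x`) && i+3 < len(name):
			v, err := strconv.ParseUint(name[i+2:i+4], 16, 8)
			if err == nil {
				b.WriteByte(byte(v))
				i += 3
				continue
			}
			b.WriteByte(name[i])
		case name[i] == '-':
			b.WriteByte('/')
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	unit := `var-lib-containerd-tmpmounts-containerd\x2dmount1624888790.mount`
	fmt.Println(unescapeUnitPath(unit))
	// /var/lib/containerd/tmpmounts/containerd-mount1624888790
}
```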
Jan 13 21:19:31.235690 containerd[1463]: time="2025-01-13T21:19:31.235612566Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:19:31.236758 containerd[1463]: time="2025-01-13T21:19:31.236717628Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734735" Jan 13 21:19:31.238467 containerd[1463]: time="2025-01-13T21:19:31.238432298Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:19:31.240162 containerd[1463]: time="2025-01-13T21:19:31.240125268Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.934886872s" Jan 13 21:19:31.240162 containerd[1463]: time="2025-01-13T21:19:31.240158100Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 21:19:31.241808 containerd[1463]: time="2025-01-13T21:19:31.241737626Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 21:19:31.243694 containerd[1463]: time="2025-01-13T21:19:31.243645430Z" level=info msg="CreateContainer within sandbox \"5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:19:31.265740 containerd[1463]: time="2025-01-13T21:19:31.265685113Z" level=info msg="CreateContainer within sandbox \"5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e\"" Jan 13 21:19:31.266291 containerd[1463]: time="2025-01-13T21:19:31.266234407Z" level=info msg="StartContainer for \"ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e\"" Jan 13 21:19:31.296521 systemd[1]: Started cri-containerd-ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e.scope - libcontainer container ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e. Jan 13 21:19:31.329504 containerd[1463]: time="2025-01-13T21:19:31.329426890Z" level=info msg="StartContainer for \"ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e\" returns successfully" Jan 13 21:19:31.345198 systemd[1]: cri-containerd-ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e.scope: Deactivated successfully. 
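The Cilium agent image pull above completes after 12.934886872s with 166,734,735 bytes read, per the containerd entries. A short Go calculation of the effective throughput implied by those two figures; the numbers are copied from the log, the code is plain arithmetic.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures taken from the containerd entries above.
	const bytesRead = 166734735 // "active requests=0, bytes read=166734735"
	pullTime, err := time.ParseDuration("12.934886872s")
	if err != nil {
		panic(err)
	}

	mib := float64(bytesRead) / (1024 * 1024)
	rate := mib / pullTime.Seconds()
	fmt.Printf("pulled %.1f MiB in %s (~%.1f MiB/s)\n", mib, pullTime, rate)
	// pulled 159.0 MiB in 12.934886872s (~12.3 MiB/s)
}
```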
Jan 13 21:19:32.197664 kubelet[2591]: E0113 21:19:32.197617 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:32.238763 containerd[1463]: time="2025-01-13T21:19:32.238672391Z" level=info msg="shim disconnected" id=ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e namespace=k8s.io Jan 13 21:19:32.238763 containerd[1463]: time="2025-01-13T21:19:32.238733276Z" level=warning msg="cleaning up after shim disconnected" id=ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e namespace=k8s.io Jan 13 21:19:32.238763 containerd[1463]: time="2025-01-13T21:19:32.238743515Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:19:32.257603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e-rootfs.mount: Deactivated successfully. Jan 13 21:19:33.156721 systemd[1]: Started sshd@7-10.0.0.36:22-10.0.0.1:53924.service - OpenSSH per-connection server daemon (10.0.0.1:53924). Jan 13 21:19:33.200530 kubelet[2591]: E0113 21:19:33.200465 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:33.202508 containerd[1463]: time="2025-01-13T21:19:33.202472389Z" level=info msg="CreateContainer within sandbox \"5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:19:33.237898 sshd[3059]: Accepted publickey for core from 10.0.0.1 port 53924 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:19:33.240108 sshd[3059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:19:33.245362 systemd-logind[1452]: New session 8 of user core. Jan 13 21:19:33.254507 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:19:33.417599 sshd[3059]: pam_unix(sshd:session): session closed for user core Jan 13 21:19:33.423527 systemd[1]: sshd@7-10.0.0.36:22-10.0.0.1:53924.service: Deactivated successfully. Jan 13 21:19:33.425626 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:19:33.427848 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:19:33.428933 systemd-logind[1452]: Removed session 8. Jan 13 21:19:33.484965 containerd[1463]: time="2025-01-13T21:19:33.484889302Z" level=info msg="CreateContainer within sandbox \"5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16\"" Jan 13 21:19:33.485592 containerd[1463]: time="2025-01-13T21:19:33.485412076Z" level=info msg="StartContainer for \"349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16\"" Jan 13 21:19:33.516494 systemd[1]: Started cri-containerd-349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16.scope - libcontainer container 349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16. Jan 13 21:19:33.570863 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:19:33.571423 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:19:33.571498 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Jan 13 21:19:33.577884 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:19:33.578270 systemd[1]: cri-containerd-349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16.scope: Deactivated successfully. Jan 13 21:19:33.602034 containerd[1463]: time="2025-01-13T21:19:33.601965719Z" level=info msg="StartContainer for \"349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16\" returns successfully" Jan 13 21:19:33.606579 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:19:33.621597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16-rootfs.mount: Deactivated successfully. Jan 13 21:19:33.899811 containerd[1463]: time="2025-01-13T21:19:33.899725360Z" level=info msg="shim disconnected" id=349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16 namespace=k8s.io Jan 13 21:19:33.899811 containerd[1463]: time="2025-01-13T21:19:33.899802857Z" level=warning msg="cleaning up after shim disconnected" id=349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16 namespace=k8s.io Jan 13 21:19:33.899811 containerd[1463]: time="2025-01-13T21:19:33.899819147Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:19:34.203715 kubelet[2591]: E0113 21:19:34.203560 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:34.207745 containerd[1463]: time="2025-01-13T21:19:34.207676865Z" level=info msg="CreateContainer within sandbox \"5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:19:34.243721 containerd[1463]: time="2025-01-13T21:19:34.243638522Z" level=info msg="CreateContainer within sandbox \"5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3\"" Jan 13 21:19:34.244383 containerd[1463]: time="2025-01-13T21:19:34.244276082Z" level=info msg="StartContainer for \"8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3\"" Jan 13 21:19:34.279570 systemd[1]: Started cri-containerd-8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3.scope - libcontainer container 8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3. Jan 13 21:19:34.318695 systemd[1]: cri-containerd-8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3.scope: Deactivated successfully. 
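The apply-sysctl-overwrites init step above coincides with systemd-sysctl being stopped and restarted, so the kernel parameters the agent relies on survive the unit's re-apply. The specific keys Cilium overwrites are not shown in the log; as a loosely related, Linux-only sketch, here is how a process can read one well-known key from /proc/sys (the key chosen is an illustrative example, not one named in the log).

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// readSysctl reads a kernel parameter from /proc/sys, e.g. "net.ipv4.ip_forward".
func readSysctl(key string) (string, error) {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	v, err := readSysctl("net.ipv4.ip_forward")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("net.ipv4.ip_forward =", v)
}
```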
Jan 13 21:19:34.319859 containerd[1463]: time="2025-01-13T21:19:34.319807056Z" level=info msg="StartContainer for \"8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3\" returns successfully" Jan 13 21:19:34.347210 containerd[1463]: time="2025-01-13T21:19:34.347131634Z" level=info msg="shim disconnected" id=8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3 namespace=k8s.io Jan 13 21:19:34.347210 containerd[1463]: time="2025-01-13T21:19:34.347195534Z" level=warning msg="cleaning up after shim disconnected" id=8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3 namespace=k8s.io Jan 13 21:19:34.347210 containerd[1463]: time="2025-01-13T21:19:34.347206244Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:19:35.210739 kubelet[2591]: E0113 21:19:35.210678 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:35.213597 containerd[1463]: time="2025-01-13T21:19:35.213541742Z" level=info msg="CreateContainer within sandbox \"5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:19:35.344096 containerd[1463]: time="2025-01-13T21:19:35.344025675Z" level=info msg="CreateContainer within sandbox \"5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b\"" Jan 13 21:19:35.344769 containerd[1463]: time="2025-01-13T21:19:35.344731063Z" level=info msg="StartContainer for \"b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b\"" Jan 13 21:19:35.380614 systemd[1]: Started cri-containerd-b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b.scope - libcontainer container b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b. Jan 13 21:19:35.410373 systemd[1]: cri-containerd-b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b.scope: Deactivated successfully. Jan 13 21:19:35.412281 containerd[1463]: time="2025-01-13T21:19:35.412165405Z" level=info msg="StartContainer for \"b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b\" returns successfully" Jan 13 21:19:35.433088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b-rootfs.mount: Deactivated successfully. 
Jan 13 21:19:35.436754 containerd[1463]: time="2025-01-13T21:19:35.436690121Z" level=info msg="shim disconnected" id=b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b namespace=k8s.io Jan 13 21:19:35.436955 containerd[1463]: time="2025-01-13T21:19:35.436753230Z" level=warning msg="cleaning up after shim disconnected" id=b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b namespace=k8s.io Jan 13 21:19:35.436955 containerd[1463]: time="2025-01-13T21:19:35.436768068Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:19:36.215230 kubelet[2591]: E0113 21:19:36.215181 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:36.217594 containerd[1463]: time="2025-01-13T21:19:36.217544672Z" level=info msg="CreateContainer within sandbox \"5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:19:36.237787 containerd[1463]: time="2025-01-13T21:19:36.237730714Z" level=info msg="CreateContainer within sandbox \"5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3\"" Jan 13 21:19:36.238443 containerd[1463]: time="2025-01-13T21:19:36.238305806Z" level=info msg="StartContainer for \"c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3\"" Jan 13 21:19:36.278642 systemd[1]: Started cri-containerd-c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3.scope - libcontainer container c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3. Jan 13 21:19:36.313588 containerd[1463]: time="2025-01-13T21:19:36.313517225Z" level=info msg="StartContainer for \"c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3\" returns successfully" Jan 13 21:19:36.451066 kubelet[2591]: I0113 21:19:36.450840 2591 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:19:36.472158 kubelet[2591]: I0113 21:19:36.471902 2591 topology_manager.go:215] "Topology Admit Handler" podUID="cdae5c64-edbb-4bdb-85f4-e10b27e2c2ff" podNamespace="kube-system" podName="coredns-76f75df574-kqsr9" Jan 13 21:19:36.472158 kubelet[2591]: I0113 21:19:36.472052 2591 topology_manager.go:215] "Topology Admit Handler" podUID="b1a57c62-1035-498a-8d52-2555c40c5381" podNamespace="kube-system" podName="coredns-76f75df574-zw6pw" Jan 13 21:19:36.482891 systemd[1]: Created slice kubepods-burstable-podb1a57c62_1035_498a_8d52_2555c40c5381.slice - libcontainer container kubepods-burstable-podb1a57c62_1035_498a_8d52_2555c40c5381.slice. Jan 13 21:19:36.493280 systemd[1]: Created slice kubepods-burstable-podcdae5c64_edbb_4bdb_85f4_e10b27e2c2ff.slice - libcontainer container kubepods-burstable-podcdae5c64_edbb_4bdb_85f4_e10b27e2c2ff.slice. 
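Each admitted pod gets a transient systemd slice whose name encodes its QoS class and UID, for example kubepods-burstable-podcdae5c64_edbb_4bdb_85f4_e10b27e2c2ff.slice for the coredns pod with UID cdae5c64-edbb-4bdb-85f4-e10b27e2c2ff. The sketch below reconstructs that naming from the examples in this log (dashes in the UID become underscores); treat it as an observation-based approximation rather than kubelet source.

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the slice names visible in the log: the QoS class
// ("besteffort" or "burstable" here) plus the pod UID with "-" mapped to "_".
// Guaranteed-QoS pods and the full cgroup path are not covered by this sketch.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "cdae5c64-edbb-4bdb-85f4-e10b27e2c2ff"))
	// kubepods-burstable-podcdae5c64_edbb_4bdb_85f4_e10b27e2c2ff.slice
	fmt.Println(podSliceName("besteffort", "9d35d48d-5f9e-4461-8be6-e85d46ac29e4"))
	// kubepods-besteffort-pod9d35d48d_5f9e_4461_8be6_e85d46ac29e4.slice
}
```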
Jan 13 21:19:36.615927 kubelet[2591]: I0113 21:19:36.615866 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddfth\" (UniqueName: \"kubernetes.io/projected/cdae5c64-edbb-4bdb-85f4-e10b27e2c2ff-kube-api-access-ddfth\") pod \"coredns-76f75df574-kqsr9\" (UID: \"cdae5c64-edbb-4bdb-85f4-e10b27e2c2ff\") " pod="kube-system/coredns-76f75df574-kqsr9" Jan 13 21:19:36.615927 kubelet[2591]: I0113 21:19:36.615923 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdae5c64-edbb-4bdb-85f4-e10b27e2c2ff-config-volume\") pod \"coredns-76f75df574-kqsr9\" (UID: \"cdae5c64-edbb-4bdb-85f4-e10b27e2c2ff\") " pod="kube-system/coredns-76f75df574-kqsr9" Jan 13 21:19:36.615927 kubelet[2591]: I0113 21:19:36.615942 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1a57c62-1035-498a-8d52-2555c40c5381-config-volume\") pod \"coredns-76f75df574-zw6pw\" (UID: \"b1a57c62-1035-498a-8d52-2555c40c5381\") " pod="kube-system/coredns-76f75df574-zw6pw" Jan 13 21:19:36.616157 kubelet[2591]: I0113 21:19:36.615974 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44kpw\" (UniqueName: \"kubernetes.io/projected/b1a57c62-1035-498a-8d52-2555c40c5381-kube-api-access-44kpw\") pod \"coredns-76f75df574-zw6pw\" (UID: \"b1a57c62-1035-498a-8d52-2555c40c5381\") " pod="kube-system/coredns-76f75df574-zw6pw" Jan 13 21:19:36.794354 kubelet[2591]: E0113 21:19:36.794224 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:36.795447 containerd[1463]: time="2025-01-13T21:19:36.795413831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zw6pw,Uid:b1a57c62-1035-498a-8d52-2555c40c5381,Namespace:kube-system,Attempt:0,}" Jan 13 21:19:36.797208 kubelet[2591]: E0113 21:19:36.797185 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:36.797589 containerd[1463]: time="2025-01-13T21:19:36.797549900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kqsr9,Uid:cdae5c64-edbb-4bdb-85f4-e10b27e2c2ff,Namespace:kube-system,Attempt:0,}" Jan 13 21:19:37.220110 kubelet[2591]: E0113 21:19:37.220066 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:38.221965 kubelet[2591]: E0113 21:19:38.221910 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:38.435430 systemd[1]: Started sshd@8-10.0.0.36:22-10.0.0.1:40788.service - OpenSSH per-connection server daemon (10.0.0.1:40788). Jan 13 21:19:38.489050 sshd[3389]: Accepted publickey for core from 10.0.0.1 port 40788 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:19:38.490955 sshd[3389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:19:38.495748 systemd-logind[1452]: New session 9 of user core. 
Jan 13 21:19:38.504553 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:19:38.630500 sshd[3389]: pam_unix(sshd:session): session closed for user core Jan 13 21:19:38.635267 systemd[1]: sshd@8-10.0.0.36:22-10.0.0.1:40788.service: Deactivated successfully. Jan 13 21:19:38.637505 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:19:38.638282 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:19:38.639561 systemd-logind[1452]: Removed session 9. Jan 13 21:19:39.222782 kubelet[2591]: E0113 21:19:39.222739 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:41.694836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount593474276.mount: Deactivated successfully. Jan 13 21:19:42.604804 containerd[1463]: time="2025-01-13T21:19:42.604703838Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:19:42.605885 containerd[1463]: time="2025-01-13T21:19:42.605768048Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907225" Jan 13 21:19:42.607246 containerd[1463]: time="2025-01-13T21:19:42.607202404Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:19:42.609002 containerd[1463]: time="2025-01-13T21:19:42.608942585Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 11.367138303s" Jan 13 21:19:42.609065 containerd[1463]: time="2025-01-13T21:19:42.609004100Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 21:19:42.611020 containerd[1463]: time="2025-01-13T21:19:42.610986907Z" level=info msg="CreateContainer within sandbox \"4d56598b09eb4da534451e501cad44fa23d85acd6123758dcb200ca93345ad8f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 21:19:42.641164 containerd[1463]: time="2025-01-13T21:19:42.640917835Z" level=info msg="CreateContainer within sandbox \"4d56598b09eb4da534451e501cad44fa23d85acd6123758dcb200ca93345ad8f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6fc179b0adff0e2af014f51cc052f823e3e3a50ecade047cce4e2f9aae44156e\"" Jan 13 21:19:42.643633 containerd[1463]: time="2025-01-13T21:19:42.643422713Z" level=info msg="StartContainer for \"6fc179b0adff0e2af014f51cc052f823e3e3a50ecade047cce4e2f9aae44156e\"" Jan 13 21:19:42.681696 systemd[1]: Started cri-containerd-6fc179b0adff0e2af014f51cc052f823e3e3a50ecade047cce4e2f9aae44156e.scope - libcontainer container 6fc179b0adff0e2af014f51cc052f823e3e3a50ecade047cce4e2f9aae44156e. 
Jan 13 21:19:42.718900 containerd[1463]: time="2025-01-13T21:19:42.718806675Z" level=info msg="StartContainer for \"6fc179b0adff0e2af014f51cc052f823e3e3a50ecade047cce4e2f9aae44156e\" returns successfully" Jan 13 21:19:43.231100 kubelet[2591]: E0113 21:19:43.231026 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:43.245102 kubelet[2591]: I0113 21:19:43.244541 2591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zvlsb" podStartSLOduration=13.308619031 podStartE2EDuration="26.244486596s" podCreationTimestamp="2025-01-13 21:19:17 +0000 UTC" firstStartedPulling="2025-01-13 21:19:18.304731986 +0000 UTC m=+14.280548624" lastFinishedPulling="2025-01-13 21:19:31.240599552 +0000 UTC m=+27.216416189" observedRunningTime="2025-01-13 21:19:37.397111451 +0000 UTC m=+33.372928118" watchObservedRunningTime="2025-01-13 21:19:43.244486596 +0000 UTC m=+39.220303233" Jan 13 21:19:43.458661 kubelet[2591]: E0113 21:19:43.458578 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:43.648972 systemd[1]: Started sshd@9-10.0.0.36:22-10.0.0.1:40790.service - OpenSSH per-connection server daemon (10.0.0.1:40790). Jan 13 21:19:43.695930 sshd[3461]: Accepted publickey for core from 10.0.0.1 port 40790 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:19:43.698204 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:19:43.704088 systemd-logind[1452]: New session 10 of user core. Jan 13 21:19:43.716688 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:19:43.848377 sshd[3461]: pam_unix(sshd:session): session closed for user core Jan 13 21:19:43.853961 systemd[1]: sshd@9-10.0.0.36:22-10.0.0.1:40790.service: Deactivated successfully. Jan 13 21:19:43.857675 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:19:43.858680 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:19:43.860403 systemd-logind[1452]: Removed session 10. 
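The startup-latency entry for cilium-zvlsb above reports podStartSLOduration=13.308619031s against a podStartE2EDuration of 26.244486596s; the gap is the image-pull window bounded by firstStartedPulling and lastFinishedPulling. The worked Go check below reproduces that arithmetic from the timestamps in the entry (it lands within a nanosecond of the logged SLO value, and the formula is inferred from these figures rather than quoted from kubelet documentation).

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	// Timestamps copied from the pod_startup_latency_tracker entry for cilium-zvlsb.
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-01-13 21:19:17 +0000 UTC")
	firstPull := parse("2025-01-13 21:19:18.304731986 +0000 UTC")
	lastPull := parse("2025-01-13 21:19:31.240599552 +0000 UTC")
	observed := parse("2025-01-13 21:19:43.244486596 +0000 UTC")

	e2e := observed.Sub(created)         // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // E2E minus the image-pull window

	fmt.Println("E2E:", e2e) // 26.244486596s
	fmt.Println("SLO:", slo) // 13.30861903s, matching the logged 13.308619031 up to rounding
}
```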
Jan 13 21:19:44.232914 kubelet[2591]: E0113 21:19:44.232854 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:45.451241 systemd-networkd[1397]: cilium_host: Link UP Jan 13 21:19:45.452234 systemd-networkd[1397]: cilium_net: Link UP Jan 13 21:19:45.453309 systemd-networkd[1397]: cilium_net: Gained carrier Jan 13 21:19:45.453608 systemd-networkd[1397]: cilium_host: Gained carrier Jan 13 21:19:45.604242 systemd-networkd[1397]: cilium_vxlan: Link UP Jan 13 21:19:45.604261 systemd-networkd[1397]: cilium_vxlan: Gained carrier Jan 13 21:19:45.862382 kernel: NET: Registered PF_ALG protocol family Jan 13 21:19:45.879498 systemd-networkd[1397]: cilium_net: Gained IPv6LL Jan 13 21:19:46.134691 systemd-networkd[1397]: cilium_host: Gained IPv6LL Jan 13 21:19:46.647497 systemd-networkd[1397]: cilium_vxlan: Gained IPv6LL Jan 13 21:19:46.754464 systemd-networkd[1397]: lxc_health: Link UP Jan 13 21:19:46.765856 systemd-networkd[1397]: lxc_health: Gained carrier Jan 13 21:19:46.925934 systemd-networkd[1397]: lxc59db9c05e721: Link UP Jan 13 21:19:46.934380 kernel: eth0: renamed from tmpbb87a Jan 13 21:19:46.939006 systemd-networkd[1397]: lxc59db9c05e721: Gained carrier Jan 13 21:19:46.985886 systemd-networkd[1397]: lxce3b8116d5e98: Link UP Jan 13 21:19:46.991392 kernel: eth0: renamed from tmpac004 Jan 13 21:19:46.996928 systemd-networkd[1397]: lxce3b8116d5e98: Gained carrier Jan 13 21:19:48.186578 systemd-networkd[1397]: lxce3b8116d5e98: Gained IPv6LL Jan 13 21:19:48.230647 kubelet[2591]: E0113 21:19:48.230604 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:48.239940 kubelet[2591]: E0113 21:19:48.239884 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:48.306377 kubelet[2591]: I0113 21:19:48.306081 2591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-fcwdf" podStartSLOduration=7.095381914 podStartE2EDuration="31.306039483s" podCreationTimestamp="2025-01-13 21:19:17 +0000 UTC" firstStartedPulling="2025-01-13 21:19:18.39871311 +0000 UTC m=+14.374529747" lastFinishedPulling="2025-01-13 21:19:42.609370679 +0000 UTC m=+38.585187316" observedRunningTime="2025-01-13 21:19:43.245071555 +0000 UTC m=+39.220888212" watchObservedRunningTime="2025-01-13 21:19:48.306039483 +0000 UTC m=+44.281856120" Jan 13 21:19:48.439548 systemd-networkd[1397]: lxc59db9c05e721: Gained IPv6LL Jan 13 21:19:48.630567 systemd-networkd[1397]: lxc_health: Gained IPv6LL Jan 13 21:19:48.862419 systemd[1]: Started sshd@10-10.0.0.36:22-10.0.0.1:54620.service - OpenSSH per-connection server daemon (10.0.0.1:54620). Jan 13 21:19:48.904223 sshd[3856]: Accepted publickey for core from 10.0.0.1 port 54620 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:19:48.906982 sshd[3856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:19:48.913602 systemd-logind[1452]: New session 11 of user core. Jan 13 21:19:48.920658 systemd[1]: Started session-11.scope - Session 11 of User core. 
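Once the Cilium agent is running, systemd-networkd reports the links it creates: cilium_host, cilium_net, the cilium_vxlan overlay device, and per-endpoint lxc* veths (lxc_health, lxc59db9c05e721, lxce3b8116d5e98), each gaining carrier and an IPv6 link-local address. A small stdlib sketch that lists interfaces matching those prefixes on the current host; the prefixes come from the log, everything else is generic.

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		// Prefixes observed in the log: cilium_host, cilium_net, cilium_vxlan, lxc*.
		if !strings.HasPrefix(ifc.Name, "cilium_") && !strings.HasPrefix(ifc.Name, "lxc") {
			continue
		}
		state := "down"
		if ifc.Flags&net.FlagUp != 0 {
			state = "up"
		}
		fmt.Printf("%-20s %s mtu=%d\n", ifc.Name, state, ifc.MTU)
	}
}
```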
Jan 13 21:19:49.057997 sshd[3856]: pam_unix(sshd:session): session closed for user core Jan 13 21:19:49.063426 systemd[1]: sshd@10-10.0.0.36:22-10.0.0.1:54620.service: Deactivated successfully. Jan 13 21:19:49.066723 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:19:49.067780 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:19:49.069266 systemd-logind[1452]: Removed session 11. Jan 13 21:19:51.468709 containerd[1463]: time="2025-01-13T21:19:51.468592104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:19:51.468709 containerd[1463]: time="2025-01-13T21:19:51.468672035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:19:51.468709 containerd[1463]: time="2025-01-13T21:19:51.468690259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:51.469203 containerd[1463]: time="2025-01-13T21:19:51.468800055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:51.495443 systemd[1]: Started cri-containerd-bb87a7a8acecd44171b0f9ce6f203e3d32fe8c36752312a2297ca4a2d8368761.scope - libcontainer container bb87a7a8acecd44171b0f9ce6f203e3d32fe8c36752312a2297ca4a2d8368761. Jan 13 21:19:51.506960 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:19:51.531273 containerd[1463]: time="2025-01-13T21:19:51.531225994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zw6pw,Uid:b1a57c62-1035-498a-8d52-2555c40c5381,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb87a7a8acecd44171b0f9ce6f203e3d32fe8c36752312a2297ca4a2d8368761\"" Jan 13 21:19:51.532044 kubelet[2591]: E0113 21:19:51.532021 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:51.534550 containerd[1463]: time="2025-01-13T21:19:51.534508397Z" level=info msg="CreateContainer within sandbox \"bb87a7a8acecd44171b0f9ce6f203e3d32fe8c36752312a2297ca4a2d8368761\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:19:51.583520 containerd[1463]: time="2025-01-13T21:19:51.582555577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:19:51.583520 containerd[1463]: time="2025-01-13T21:19:51.583293252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:19:51.583520 containerd[1463]: time="2025-01-13T21:19:51.583310404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:51.583520 containerd[1463]: time="2025-01-13T21:19:51.583434998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:19:51.618513 systemd[1]: Started cri-containerd-ac0049878110d9c8369a9da99540585f5a30f8f382f1d5759f4c2b87878fec81.scope - libcontainer container ac0049878110d9c8369a9da99540585f5a30f8f382f1d5759f4c2b87878fec81. 
Jan 13 21:19:51.634675 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:19:51.659481 containerd[1463]: time="2025-01-13T21:19:51.659432190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kqsr9,Uid:cdae5c64-edbb-4bdb-85f4-e10b27e2c2ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac0049878110d9c8369a9da99540585f5a30f8f382f1d5759f4c2b87878fec81\"" Jan 13 21:19:51.660335 kubelet[2591]: E0113 21:19:51.660296 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:51.662900 containerd[1463]: time="2025-01-13T21:19:51.662857652Z" level=info msg="CreateContainer within sandbox \"ac0049878110d9c8369a9da99540585f5a30f8f382f1d5759f4c2b87878fec81\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:19:52.278429 containerd[1463]: time="2025-01-13T21:19:52.278311967Z" level=info msg="CreateContainer within sandbox \"bb87a7a8acecd44171b0f9ce6f203e3d32fe8c36752312a2297ca4a2d8368761\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d80a2092e4b6efddd4421e78da25f0871601fbebb4adc964285047e4577ae259\"" Jan 13 21:19:52.279112 containerd[1463]: time="2025-01-13T21:19:52.279048010Z" level=info msg="StartContainer for \"d80a2092e4b6efddd4421e78da25f0871601fbebb4adc964285047e4577ae259\"" Jan 13 21:19:52.312600 systemd[1]: Started cri-containerd-d80a2092e4b6efddd4421e78da25f0871601fbebb4adc964285047e4577ae259.scope - libcontainer container d80a2092e4b6efddd4421e78da25f0871601fbebb4adc964285047e4577ae259. Jan 13 21:19:52.334176 containerd[1463]: time="2025-01-13T21:19:52.334105004Z" level=info msg="CreateContainer within sandbox \"ac0049878110d9c8369a9da99540585f5a30f8f382f1d5759f4c2b87878fec81\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"73468d96efcf9d6e4c5e9c593f9d47e1e4082918e619104cd4c107e1a7254118\"" Jan 13 21:19:52.334962 containerd[1463]: time="2025-01-13T21:19:52.334928058Z" level=info msg="StartContainer for \"73468d96efcf9d6e4c5e9c593f9d47e1e4082918e619104cd4c107e1a7254118\"" Jan 13 21:19:52.342540 containerd[1463]: time="2025-01-13T21:19:52.342410367Z" level=info msg="StartContainer for \"d80a2092e4b6efddd4421e78da25f0871601fbebb4adc964285047e4577ae259\" returns successfully" Jan 13 21:19:52.366538 systemd[1]: Started cri-containerd-73468d96efcf9d6e4c5e9c593f9d47e1e4082918e619104cd4c107e1a7254118.scope - libcontainer container 73468d96efcf9d6e4c5e9c593f9d47e1e4082918e619104cd4c107e1a7254118. 
Jan 13 21:19:52.402443 containerd[1463]: time="2025-01-13T21:19:52.402388451Z" level=info msg="StartContainer for \"73468d96efcf9d6e4c5e9c593f9d47e1e4082918e619104cd4c107e1a7254118\" returns successfully" Jan 13 21:19:53.255308 kubelet[2591]: E0113 21:19:53.255058 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:53.257719 kubelet[2591]: E0113 21:19:53.257688 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:53.276662 kubelet[2591]: I0113 21:19:53.276591 2591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kqsr9" podStartSLOduration=36.275735652 podStartE2EDuration="36.275735652s" podCreationTimestamp="2025-01-13 21:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:19:53.265490005 +0000 UTC m=+49.241306642" watchObservedRunningTime="2025-01-13 21:19:53.275735652 +0000 UTC m=+49.251552289" Jan 13 21:19:53.276866 kubelet[2591]: I0113 21:19:53.276759 2591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zw6pw" podStartSLOduration=36.276713117 podStartE2EDuration="36.276713117s" podCreationTimestamp="2025-01-13 21:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:19:53.274833027 +0000 UTC m=+49.250649664" watchObservedRunningTime="2025-01-13 21:19:53.276713117 +0000 UTC m=+49.252529754" Jan 13 21:19:54.075830 systemd[1]: Started sshd@11-10.0.0.36:22-10.0.0.1:54622.service - OpenSSH per-connection server daemon (10.0.0.1:54622). Jan 13 21:19:54.115595 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 54622 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:19:54.117418 sshd[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:19:54.121615 systemd-logind[1452]: New session 12 of user core. Jan 13 21:19:54.130492 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 21:19:54.241360 sshd[4046]: pam_unix(sshd:session): session closed for user core Jan 13 21:19:54.245379 systemd[1]: sshd@11-10.0.0.36:22-10.0.0.1:54622.service: Deactivated successfully. Jan 13 21:19:54.247613 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:19:54.248283 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:19:54.249230 systemd-logind[1452]: Removed session 12. 
Jan 13 21:19:54.258319 kubelet[2591]: E0113 21:19:54.258289 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:54.258831 kubelet[2591]: E0113 21:19:54.258396 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:55.259963 kubelet[2591]: E0113 21:19:55.259929 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:19:59.255476 systemd[1]: Started sshd@12-10.0.0.36:22-10.0.0.1:58414.service - OpenSSH per-connection server daemon (10.0.0.1:58414). Jan 13 21:19:59.291218 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 58414 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:19:59.292829 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:19:59.297493 systemd-logind[1452]: New session 13 of user core. Jan 13 21:19:59.304451 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:19:59.425218 sshd[4061]: pam_unix(sshd:session): session closed for user core Jan 13 21:19:59.446566 systemd[1]: sshd@12-10.0.0.36:22-10.0.0.1:58414.service: Deactivated successfully. Jan 13 21:19:59.449789 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:19:59.451923 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:19:59.458876 systemd[1]: Started sshd@13-10.0.0.36:22-10.0.0.1:58428.service - OpenSSH per-connection server daemon (10.0.0.1:58428). Jan 13 21:19:59.460188 systemd-logind[1452]: Removed session 13. Jan 13 21:19:59.496888 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 58428 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:19:59.499694 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:19:59.505305 systemd-logind[1452]: New session 14 of user core. Jan 13 21:19:59.516477 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 21:19:59.673756 sshd[4076]: pam_unix(sshd:session): session closed for user core Jan 13 21:19:59.686555 systemd[1]: sshd@13-10.0.0.36:22-10.0.0.1:58428.service: Deactivated successfully. Jan 13 21:19:59.690016 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:19:59.694603 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit. Jan 13 21:19:59.708793 systemd[1]: Started sshd@14-10.0.0.36:22-10.0.0.1:58440.service - OpenSSH per-connection server daemon (10.0.0.1:58440). Jan 13 21:19:59.709925 systemd-logind[1452]: Removed session 14. Jan 13 21:19:59.743686 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 58440 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:19:59.745879 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:19:59.750903 systemd-logind[1452]: New session 15 of user core. Jan 13 21:19:59.756635 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 21:19:59.872974 sshd[4088]: pam_unix(sshd:session): session closed for user core Jan 13 21:19:59.877787 systemd[1]: sshd@14-10.0.0.36:22-10.0.0.1:58440.service: Deactivated successfully. 
Jan 13 21:19:59.880396 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:19:59.881205 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:19:59.882742 systemd-logind[1452]: Removed session 15. Jan 13 21:20:04.889103 systemd[1]: Started sshd@15-10.0.0.36:22-10.0.0.1:46116.service - OpenSSH per-connection server daemon (10.0.0.1:46116). Jan 13 21:20:04.931743 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 46116 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:20:04.934104 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:20:04.940286 systemd-logind[1452]: New session 16 of user core. Jan 13 21:20:04.947645 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:20:05.079791 sshd[4104]: pam_unix(sshd:session): session closed for user core Jan 13 21:20:05.085098 systemd[1]: sshd@15-10.0.0.36:22-10.0.0.1:46116.service: Deactivated successfully. Jan 13 21:20:05.087375 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:20:05.088449 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:20:05.089793 systemd-logind[1452]: Removed session 16. Jan 13 21:20:10.096687 systemd[1]: Started sshd@16-10.0.0.36:22-10.0.0.1:46124.service - OpenSSH per-connection server daemon (10.0.0.1:46124). Jan 13 21:20:10.133234 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 46124 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:20:10.135476 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:20:10.140728 systemd-logind[1452]: New session 17 of user core. Jan 13 21:20:10.148488 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:20:10.257195 sshd[4118]: pam_unix(sshd:session): session closed for user core Jan 13 21:20:10.261827 systemd[1]: sshd@16-10.0.0.36:22-10.0.0.1:46124.service: Deactivated successfully. Jan 13 21:20:10.264183 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:20:10.264857 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:20:10.265903 systemd-logind[1452]: Removed session 17. Jan 13 21:20:15.270600 systemd[1]: Started sshd@17-10.0.0.36:22-10.0.0.1:39150.service - OpenSSH per-connection server daemon (10.0.0.1:39150). Jan 13 21:20:15.306463 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 39150 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:20:15.308058 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:20:15.311826 systemd-logind[1452]: New session 18 of user core. Jan 13 21:20:15.322483 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 21:20:15.423598 sshd[4132]: pam_unix(sshd:session): session closed for user core Jan 13 21:20:15.435407 systemd[1]: sshd@17-10.0.0.36:22-10.0.0.1:39150.service: Deactivated successfully. Jan 13 21:20:15.437228 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 21:20:15.438930 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:20:15.444612 systemd[1]: Started sshd@18-10.0.0.36:22-10.0.0.1:39158.service - OpenSSH per-connection server daemon (10.0.0.1:39158). Jan 13 21:20:15.445828 systemd-logind[1452]: Removed session 18. 
Jan 13 21:20:15.475703 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 39158 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:20:15.477247 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:20:15.481036 systemd-logind[1452]: New session 19 of user core. Jan 13 21:20:15.492453 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 21:20:15.740268 sshd[4146]: pam_unix(sshd:session): session closed for user core Jan 13 21:20:15.753436 systemd[1]: sshd@18-10.0.0.36:22-10.0.0.1:39158.service: Deactivated successfully. Jan 13 21:20:15.755562 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 21:20:15.757244 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit. Jan 13 21:20:15.758696 systemd[1]: Started sshd@19-10.0.0.36:22-10.0.0.1:39170.service - OpenSSH per-connection server daemon (10.0.0.1:39170). Jan 13 21:20:15.759524 systemd-logind[1452]: Removed session 19. Jan 13 21:20:15.799248 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 39170 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:20:15.801310 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:20:15.805656 systemd-logind[1452]: New session 20 of user core. Jan 13 21:20:15.816691 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 21:20:16.142213 kubelet[2591]: E0113 21:20:16.140769 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:20:17.215162 sshd[4159]: pam_unix(sshd:session): session closed for user core Jan 13 21:20:17.224623 systemd[1]: sshd@19-10.0.0.36:22-10.0.0.1:39170.service: Deactivated successfully. Jan 13 21:20:17.226789 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:20:17.230993 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit. Jan 13 21:20:17.236874 systemd[1]: Started sshd@20-10.0.0.36:22-10.0.0.1:39182.service - OpenSSH per-connection server daemon (10.0.0.1:39182). Jan 13 21:20:17.238891 systemd-logind[1452]: Removed session 20. Jan 13 21:20:17.270241 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 39182 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:20:17.272065 sshd[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:20:17.276095 systemd-logind[1452]: New session 21 of user core. Jan 13 21:20:17.289471 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 21:20:17.495583 sshd[4181]: pam_unix(sshd:session): session closed for user core Jan 13 21:20:17.507521 systemd[1]: sshd@20-10.0.0.36:22-10.0.0.1:39182.service: Deactivated successfully. Jan 13 21:20:17.509636 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 21:20:17.511466 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit. Jan 13 21:20:17.519702 systemd[1]: Started sshd@21-10.0.0.36:22-10.0.0.1:39192.service - OpenSSH per-connection server daemon (10.0.0.1:39192). Jan 13 21:20:17.520933 systemd-logind[1452]: Removed session 21. 
Jan 13 21:20:17.554821 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 39192 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:20:17.556858 sshd[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:20:17.561922 systemd-logind[1452]: New session 22 of user core. Jan 13 21:20:17.579631 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 21:20:17.706514 sshd[4193]: pam_unix(sshd:session): session closed for user core Jan 13 21:20:17.711178 systemd[1]: sshd@21-10.0.0.36:22-10.0.0.1:39192.service: Deactivated successfully. Jan 13 21:20:17.714256 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 21:20:17.715142 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit. Jan 13 21:20:17.716210 systemd-logind[1452]: Removed session 22. Jan 13 21:20:22.719360 systemd[1]: Started sshd@22-10.0.0.36:22-10.0.0.1:39206.service - OpenSSH per-connection server daemon (10.0.0.1:39206). Jan 13 21:20:22.760768 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 39206 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:20:22.762526 sshd[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:20:22.766702 systemd-logind[1452]: New session 23 of user core. Jan 13 21:20:22.775540 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 21:20:22.886445 sshd[4212]: pam_unix(sshd:session): session closed for user core Jan 13 21:20:22.890903 systemd[1]: sshd@22-10.0.0.36:22-10.0.0.1:39206.service: Deactivated successfully. Jan 13 21:20:22.893114 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 21:20:22.893745 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit. Jan 13 21:20:22.894620 systemd-logind[1452]: Removed session 23. Jan 13 21:20:27.899245 systemd[1]: Started sshd@23-10.0.0.36:22-10.0.0.1:40748.service - OpenSSH per-connection server daemon (10.0.0.1:40748). Jan 13 21:20:27.939700 sshd[4227]: Accepted publickey for core from 10.0.0.1 port 40748 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:20:27.942722 sshd[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:20:27.947115 systemd-logind[1452]: New session 24 of user core. Jan 13 21:20:27.953476 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 21:20:28.063102 sshd[4227]: pam_unix(sshd:session): session closed for user core Jan 13 21:20:28.067077 systemd[1]: sshd@23-10.0.0.36:22-10.0.0.1:40748.service: Deactivated successfully. Jan 13 21:20:28.069273 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 21:20:28.070318 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit. Jan 13 21:20:28.071406 systemd-logind[1452]: Removed session 24. Jan 13 21:20:32.140255 kubelet[2591]: E0113 21:20:32.140152 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:20:33.077629 systemd[1]: Started sshd@24-10.0.0.36:22-10.0.0.1:40754.service - OpenSSH per-connection server daemon (10.0.0.1:40754). 
Jan 13 21:20:33.121840 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 40754 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:20:33.124382 sshd[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:20:33.129832 systemd-logind[1452]: New session 25 of user core. Jan 13 21:20:33.139602 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 21:20:33.269201 sshd[4242]: pam_unix(sshd:session): session closed for user core Jan 13 21:20:33.275494 systemd[1]: sshd@24-10.0.0.36:22-10.0.0.1:40754.service: Deactivated successfully. Jan 13 21:20:33.279760 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 21:20:33.281581 systemd-logind[1452]: Session 25 logged out. Waiting for processes to exit. Jan 13 21:20:33.282929 systemd-logind[1452]: Removed session 25. Jan 13 21:20:34.140018 kubelet[2591]: E0113 21:20:34.139986 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:20:38.283173 systemd[1]: Started sshd@25-10.0.0.36:22-10.0.0.1:58312.service - OpenSSH per-connection server daemon (10.0.0.1:58312). Jan 13 21:20:38.323049 sshd[4256]: Accepted publickey for core from 10.0.0.1 port 58312 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:20:38.325163 sshd[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:20:38.331188 systemd-logind[1452]: New session 26 of user core. Jan 13 21:20:38.342480 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 21:20:38.481091 sshd[4256]: pam_unix(sshd:session): session closed for user core Jan 13 21:20:38.492522 systemd[1]: sshd@25-10.0.0.36:22-10.0.0.1:58312.service: Deactivated successfully. Jan 13 21:20:38.495106 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 21:20:38.498034 systemd-logind[1452]: Session 26 logged out. Waiting for processes to exit. Jan 13 21:20:38.506874 systemd[1]: Started sshd@26-10.0.0.36:22-10.0.0.1:58322.service - OpenSSH per-connection server daemon (10.0.0.1:58322). Jan 13 21:20:38.508731 systemd-logind[1452]: Removed session 26. Jan 13 21:20:38.546956 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 58322 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:20:38.549079 sshd[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:20:38.555223 systemd-logind[1452]: New session 27 of user core. Jan 13 21:20:38.569626 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 21:20:39.991432 containerd[1463]: time="2025-01-13T21:20:39.991364967Z" level=info msg="StopContainer for \"6fc179b0adff0e2af014f51cc052f823e3e3a50ecade047cce4e2f9aae44156e\" with timeout 30 (s)" Jan 13 21:20:39.992933 containerd[1463]: time="2025-01-13T21:20:39.992885887Z" level=info msg="Stop container \"6fc179b0adff0e2af014f51cc052f823e3e3a50ecade047cce4e2f9aae44156e\" with signal terminated" Jan 13 21:20:40.018577 systemd[1]: cri-containerd-6fc179b0adff0e2af014f51cc052f823e3e3a50ecade047cce4e2f9aae44156e.scope: Deactivated successfully. Jan 13 21:20:40.041428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fc179b0adff0e2af014f51cc052f823e3e3a50ecade047cce4e2f9aae44156e-rootfs.mount: Deactivated successfully. 
Jan 13 21:20:40.122929 containerd[1463]: time="2025-01-13T21:20:40.122880837Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:20:40.132857 containerd[1463]: time="2025-01-13T21:20:40.132834773Z" level=info msg="StopContainer for \"c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3\" with timeout 2 (s)" Jan 13 21:20:40.133012 containerd[1463]: time="2025-01-13T21:20:40.132992262Z" level=info msg="Stop container \"c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3\" with signal terminated" Jan 13 21:20:40.139955 kubelet[2591]: E0113 21:20:40.139900 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:20:40.139992 systemd-networkd[1397]: lxc_health: Link DOWN Jan 13 21:20:40.139996 systemd-networkd[1397]: lxc_health: Lost carrier Jan 13 21:20:40.144466 containerd[1463]: time="2025-01-13T21:20:40.142091465Z" level=info msg="shim disconnected" id=6fc179b0adff0e2af014f51cc052f823e3e3a50ecade047cce4e2f9aae44156e namespace=k8s.io Jan 13 21:20:40.144466 containerd[1463]: time="2025-01-13T21:20:40.142150206Z" level=warning msg="cleaning up after shim disconnected" id=6fc179b0adff0e2af014f51cc052f823e3e3a50ecade047cce4e2f9aae44156e namespace=k8s.io Jan 13 21:20:40.144466 containerd[1463]: time="2025-01-13T21:20:40.142158211Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:20:40.159554 systemd[1]: cri-containerd-c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3.scope: Deactivated successfully. Jan 13 21:20:40.159854 systemd[1]: cri-containerd-c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3.scope: Consumed 8.225s CPU time. Jan 13 21:20:40.179636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3-rootfs.mount: Deactivated successfully. Jan 13 21:20:40.265222 containerd[1463]: time="2025-01-13T21:20:40.265134006Z" level=info msg="StopContainer for \"6fc179b0adff0e2af014f51cc052f823e3e3a50ecade047cce4e2f9aae44156e\" returns successfully" Jan 13 21:20:40.265752 containerd[1463]: time="2025-01-13T21:20:40.265727694Z" level=info msg="StopPodSandbox for \"4d56598b09eb4da534451e501cad44fa23d85acd6123758dcb200ca93345ad8f\"" Jan 13 21:20:40.265795 containerd[1463]: time="2025-01-13T21:20:40.265756539Z" level=info msg="Container to stop \"6fc179b0adff0e2af014f51cc052f823e3e3a50ecade047cce4e2f9aae44156e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:20:40.267675 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d56598b09eb4da534451e501cad44fa23d85acd6123758dcb200ca93345ad8f-shm.mount: Deactivated successfully. Jan 13 21:20:40.277306 systemd[1]: cri-containerd-4d56598b09eb4da534451e501cad44fa23d85acd6123758dcb200ca93345ad8f.scope: Deactivated successfully. Jan 13 21:20:40.298927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d56598b09eb4da534451e501cad44fa23d85acd6123758dcb200ca93345ad8f-rootfs.mount: Deactivated successfully. 
Jan 13 21:20:40.340296 containerd[1463]: time="2025-01-13T21:20:40.340193019Z" level=info msg="shim disconnected" id=c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3 namespace=k8s.io Jan 13 21:20:40.340296 containerd[1463]: time="2025-01-13T21:20:40.340262241Z" level=warning msg="cleaning up after shim disconnected" id=c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3 namespace=k8s.io Jan 13 21:20:40.340296 containerd[1463]: time="2025-01-13T21:20:40.340275215Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:20:40.358826 containerd[1463]: time="2025-01-13T21:20:40.358747721Z" level=info msg="shim disconnected" id=4d56598b09eb4da534451e501cad44fa23d85acd6123758dcb200ca93345ad8f namespace=k8s.io Jan 13 21:20:40.358978 containerd[1463]: time="2025-01-13T21:20:40.358902976Z" level=warning msg="cleaning up after shim disconnected" id=4d56598b09eb4da534451e501cad44fa23d85acd6123758dcb200ca93345ad8f namespace=k8s.io Jan 13 21:20:40.358978 containerd[1463]: time="2025-01-13T21:20:40.358918395Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:20:40.360397 containerd[1463]: time="2025-01-13T21:20:40.359652400Z" level=info msg="StopContainer for \"c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3\" returns successfully" Jan 13 21:20:40.360397 containerd[1463]: time="2025-01-13T21:20:40.360105641Z" level=info msg="StopPodSandbox for \"5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097\"" Jan 13 21:20:40.360397 containerd[1463]: time="2025-01-13T21:20:40.360136208Z" level=info msg="Container to stop \"349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:20:40.360397 containerd[1463]: time="2025-01-13T21:20:40.360148803Z" level=info msg="Container to stop \"b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:20:40.360397 containerd[1463]: time="2025-01-13T21:20:40.360160284Z" level=info msg="Container to stop \"ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:20:40.360397 containerd[1463]: time="2025-01-13T21:20:40.360173870Z" level=info msg="Container to stop \"8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:20:40.360397 containerd[1463]: time="2025-01-13T21:20:40.360185432Z" level=info msg="Container to stop \"c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:20:40.367626 systemd[1]: cri-containerd-5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097.scope: Deactivated successfully. 
Jan 13 21:20:40.386792 containerd[1463]: time="2025-01-13T21:20:40.386738936Z" level=info msg="TearDown network for sandbox \"4d56598b09eb4da534451e501cad44fa23d85acd6123758dcb200ca93345ad8f\" successfully" Jan 13 21:20:40.386792 containerd[1463]: time="2025-01-13T21:20:40.386782449Z" level=info msg="StopPodSandbox for \"4d56598b09eb4da534451e501cad44fa23d85acd6123758dcb200ca93345ad8f\" returns successfully" Jan 13 21:20:40.399201 containerd[1463]: time="2025-01-13T21:20:40.399071530Z" level=info msg="shim disconnected" id=5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097 namespace=k8s.io Jan 13 21:20:40.399201 containerd[1463]: time="2025-01-13T21:20:40.399124792Z" level=warning msg="cleaning up after shim disconnected" id=5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097 namespace=k8s.io Jan 13 21:20:40.399201 containerd[1463]: time="2025-01-13T21:20:40.399133187Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:20:40.417536 containerd[1463]: time="2025-01-13T21:20:40.417454977Z" level=info msg="TearDown network for sandbox \"5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097\" successfully" Jan 13 21:20:40.417536 containerd[1463]: time="2025-01-13T21:20:40.417508659Z" level=info msg="StopPodSandbox for \"5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097\" returns successfully" Jan 13 21:20:40.502671 kubelet[2591]: I0113 21:20:40.502576 2591 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs4kp\" (UniqueName: \"kubernetes.io/projected/9242f17f-039a-43d8-ba7b-cff54d82d540-kube-api-access-fs4kp\") pod \"9242f17f-039a-43d8-ba7b-cff54d82d540\" (UID: \"9242f17f-039a-43d8-ba7b-cff54d82d540\") " Jan 13 21:20:40.502671 kubelet[2591]: I0113 21:20:40.502665 2591 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9242f17f-039a-43d8-ba7b-cff54d82d540-cilium-config-path\") pod \"9242f17f-039a-43d8-ba7b-cff54d82d540\" (UID: \"9242f17f-039a-43d8-ba7b-cff54d82d540\") " Jan 13 21:20:40.506715 kubelet[2591]: I0113 21:20:40.506635 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9242f17f-039a-43d8-ba7b-cff54d82d540-kube-api-access-fs4kp" (OuterVolumeSpecName: "kube-api-access-fs4kp") pod "9242f17f-039a-43d8-ba7b-cff54d82d540" (UID: "9242f17f-039a-43d8-ba7b-cff54d82d540"). InnerVolumeSpecName "kube-api-access-fs4kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:20:40.506910 kubelet[2591]: I0113 21:20:40.506887 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9242f17f-039a-43d8-ba7b-cff54d82d540-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9242f17f-039a-43d8-ba7b-cff54d82d540" (UID: "9242f17f-039a-43d8-ba7b-cff54d82d540"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:20:40.604352 kubelet[2591]: I0113 21:20:40.602843 2591 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-hostproc\") pod \"e74a0430-f204-483b-978a-5818dee1b4ed\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " Jan 13 21:20:40.604352 kubelet[2591]: I0113 21:20:40.603021 2591 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e74a0430-f204-483b-978a-5818dee1b4ed-hubble-tls\") pod \"e74a0430-f204-483b-978a-5818dee1b4ed\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " Jan 13 21:20:40.604352 kubelet[2591]: I0113 21:20:40.602942 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-hostproc" (OuterVolumeSpecName: "hostproc") pod "e74a0430-f204-483b-978a-5818dee1b4ed" (UID: "e74a0430-f204-483b-978a-5818dee1b4ed"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:20:40.604352 kubelet[2591]: I0113 21:20:40.603101 2591 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e74a0430-f204-483b-978a-5818dee1b4ed-clustermesh-secrets\") pod \"e74a0430-f204-483b-978a-5818dee1b4ed\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " Jan 13 21:20:40.604352 kubelet[2591]: I0113 21:20:40.603128 2591 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e74a0430-f204-483b-978a-5818dee1b4ed-cilium-config-path\") pod \"e74a0430-f204-483b-978a-5818dee1b4ed\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " Jan 13 21:20:40.604352 kubelet[2591]: I0113 21:20:40.603377 2591 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-cilium-cgroup\") pod \"e74a0430-f204-483b-978a-5818dee1b4ed\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " Jan 13 21:20:40.604628 kubelet[2591]: I0113 21:20:40.603405 2591 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b7bs\" (UniqueName: \"kubernetes.io/projected/e74a0430-f204-483b-978a-5818dee1b4ed-kube-api-access-7b7bs\") pod \"e74a0430-f204-483b-978a-5818dee1b4ed\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " Jan 13 21:20:40.604628 kubelet[2591]: I0113 21:20:40.603426 2591 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-cilium-run\") pod \"e74a0430-f204-483b-978a-5818dee1b4ed\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " Jan 13 21:20:40.604628 kubelet[2591]: I0113 21:20:40.603447 2591 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-xtables-lock\") pod \"e74a0430-f204-483b-978a-5818dee1b4ed\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " Jan 13 21:20:40.604628 kubelet[2591]: I0113 21:20:40.603534 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod 
"e74a0430-f204-483b-978a-5818dee1b4ed" (UID: "e74a0430-f204-483b-978a-5818dee1b4ed"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:20:40.604628 kubelet[2591]: I0113 21:20:40.603608 2591 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-lib-modules\") pod \"e74a0430-f204-483b-978a-5818dee1b4ed\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " Jan 13 21:20:40.604628 kubelet[2591]: I0113 21:20:40.603634 2591 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-host-proc-sys-kernel\") pod \"e74a0430-f204-483b-978a-5818dee1b4ed\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " Jan 13 21:20:40.604776 kubelet[2591]: I0113 21:20:40.603666 2591 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-bpf-maps\") pod \"e74a0430-f204-483b-978a-5818dee1b4ed\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " Jan 13 21:20:40.604776 kubelet[2591]: I0113 21:20:40.603688 2591 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-etc-cni-netd\") pod \"e74a0430-f204-483b-978a-5818dee1b4ed\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " Jan 13 21:20:40.604776 kubelet[2591]: I0113 21:20:40.603708 2591 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-cni-path\") pod \"e74a0430-f204-483b-978a-5818dee1b4ed\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " Jan 13 21:20:40.604776 kubelet[2591]: I0113 21:20:40.603730 2591 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-host-proc-sys-net\") pod \"e74a0430-f204-483b-978a-5818dee1b4ed\" (UID: \"e74a0430-f204-483b-978a-5818dee1b4ed\") " Jan 13 21:20:40.604776 kubelet[2591]: I0113 21:20:40.603766 2591 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 13 21:20:40.604776 kubelet[2591]: I0113 21:20:40.603782 2591 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 13 21:20:40.604776 kubelet[2591]: I0113 21:20:40.603797 2591 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9242f17f-039a-43d8-ba7b-cff54d82d540-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 21:20:40.604931 kubelet[2591]: I0113 21:20:40.603811 2591 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fs4kp\" (UniqueName: \"kubernetes.io/projected/9242f17f-039a-43d8-ba7b-cff54d82d540-kube-api-access-fs4kp\") on node \"localhost\" DevicePath \"\"" Jan 13 21:20:40.604931 kubelet[2591]: I0113 21:20:40.603838 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e74a0430-f204-483b-978a-5818dee1b4ed" (UID: "e74a0430-f204-483b-978a-5818dee1b4ed"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:20:40.612856 kubelet[2591]: I0113 21:20:40.611783 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e74a0430-f204-483b-978a-5818dee1b4ed-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e74a0430-f204-483b-978a-5818dee1b4ed" (UID: "e74a0430-f204-483b-978a-5818dee1b4ed"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:20:40.612856 kubelet[2591]: I0113 21:20:40.611868 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e74a0430-f204-483b-978a-5818dee1b4ed" (UID: "e74a0430-f204-483b-978a-5818dee1b4ed"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:20:40.612856 kubelet[2591]: I0113 21:20:40.611898 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e74a0430-f204-483b-978a-5818dee1b4ed" (UID: "e74a0430-f204-483b-978a-5818dee1b4ed"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:20:40.612856 kubelet[2591]: I0113 21:20:40.611920 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e74a0430-f204-483b-978a-5818dee1b4ed" (UID: "e74a0430-f204-483b-978a-5818dee1b4ed"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:20:40.612856 kubelet[2591]: I0113 21:20:40.611942 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e74a0430-f204-483b-978a-5818dee1b4ed" (UID: "e74a0430-f204-483b-978a-5818dee1b4ed"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:20:40.613084 kubelet[2591]: I0113 21:20:40.611964 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e74a0430-f204-483b-978a-5818dee1b4ed" (UID: "e74a0430-f204-483b-978a-5818dee1b4ed"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:20:40.613084 kubelet[2591]: I0113 21:20:40.611985 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e74a0430-f204-483b-978a-5818dee1b4ed" (UID: "e74a0430-f204-483b-978a-5818dee1b4ed"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:20:40.613084 kubelet[2591]: I0113 21:20:40.612007 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-cni-path" (OuterVolumeSpecName: "cni-path") pod "e74a0430-f204-483b-978a-5818dee1b4ed" (UID: "e74a0430-f204-483b-978a-5818dee1b4ed"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:20:40.613084 kubelet[2591]: I0113 21:20:40.612768 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e74a0430-f204-483b-978a-5818dee1b4ed-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e74a0430-f204-483b-978a-5818dee1b4ed" (UID: "e74a0430-f204-483b-978a-5818dee1b4ed"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:20:40.619855 kubelet[2591]: I0113 21:20:40.619792 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e74a0430-f204-483b-978a-5818dee1b4ed-kube-api-access-7b7bs" (OuterVolumeSpecName: "kube-api-access-7b7bs") pod "e74a0430-f204-483b-978a-5818dee1b4ed" (UID: "e74a0430-f204-483b-978a-5818dee1b4ed"). InnerVolumeSpecName "kube-api-access-7b7bs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:20:40.619997 kubelet[2591]: I0113 21:20:40.619962 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e74a0430-f204-483b-978a-5818dee1b4ed-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e74a0430-f204-483b-978a-5818dee1b4ed" (UID: "e74a0430-f204-483b-978a-5818dee1b4ed"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:20:40.704779 kubelet[2591]: I0113 21:20:40.704721 2591 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 13 21:20:40.704779 kubelet[2591]: I0113 21:20:40.704766 2591 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 13 21:20:40.704779 kubelet[2591]: I0113 21:20:40.704782 2591 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 13 21:20:40.704779 kubelet[2591]: I0113 21:20:40.704792 2591 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e74a0430-f204-483b-978a-5818dee1b4ed-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 21:20:40.704779 kubelet[2591]: I0113 21:20:40.704802 2591 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e74a0430-f204-483b-978a-5818dee1b4ed-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 13 21:20:40.705185 kubelet[2591]: I0113 21:20:40.704811 2591 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e74a0430-f204-483b-978a-5818dee1b4ed-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 13 21:20:40.705185 kubelet[2591]: I0113 21:20:40.704820 2591 reconciler_common.go:300] "Volume detached for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 13 21:20:40.705185 kubelet[2591]: I0113 21:20:40.704830 2591 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7b7bs\" (UniqueName: \"kubernetes.io/projected/e74a0430-f204-483b-978a-5818dee1b4ed-kube-api-access-7b7bs\") on node \"localhost\" DevicePath \"\"" Jan 13 21:20:40.705185 kubelet[2591]: I0113 21:20:40.704839 2591 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 13 21:20:40.705185 kubelet[2591]: I0113 21:20:40.704847 2591 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 13 21:20:40.705185 kubelet[2591]: I0113 21:20:40.704857 2591 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 13 21:20:40.705185 kubelet[2591]: I0113 21:20:40.704865 2591 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e74a0430-f204-483b-978a-5818dee1b4ed-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 13 21:20:41.016040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097-rootfs.mount: Deactivated successfully. Jan 13 21:20:41.016165 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5cc98b0bc1e320f3ee1f8efbae5684a934a359e0039a3733e994346e430f5097-shm.mount: Deactivated successfully. Jan 13 21:20:41.016250 systemd[1]: var-lib-kubelet-pods-e74a0430\x2df204\x2d483b\x2d978a\x2d5818dee1b4ed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7b7bs.mount: Deactivated successfully. Jan 13 21:20:41.016350 systemd[1]: var-lib-kubelet-pods-9242f17f\x2d039a\x2d43d8\x2dba7b\x2dcff54d82d540-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfs4kp.mount: Deactivated successfully. Jan 13 21:20:41.016431 systemd[1]: var-lib-kubelet-pods-e74a0430\x2df204\x2d483b\x2d978a\x2d5818dee1b4ed-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 21:20:41.016517 systemd[1]: var-lib-kubelet-pods-e74a0430\x2df204\x2d483b\x2d978a\x2d5818dee1b4ed-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 21:20:41.355769 kubelet[2591]: I0113 21:20:41.355617 2591 scope.go:117] "RemoveContainer" containerID="6fc179b0adff0e2af014f51cc052f823e3e3a50ecade047cce4e2f9aae44156e" Jan 13 21:20:41.357180 containerd[1463]: time="2025-01-13T21:20:41.356827448Z" level=info msg="RemoveContainer for \"6fc179b0adff0e2af014f51cc052f823e3e3a50ecade047cce4e2f9aae44156e\"" Jan 13 21:20:41.362875 systemd[1]: Removed slice kubepods-besteffort-pod9242f17f_039a_43d8_ba7b_cff54d82d540.slice - libcontainer container kubepods-besteffort-pod9242f17f_039a_43d8_ba7b_cff54d82d540.slice. Jan 13 21:20:41.365237 systemd[1]: Removed slice kubepods-burstable-pode74a0430_f204_483b_978a_5818dee1b4ed.slice - libcontainer container kubepods-burstable-pode74a0430_f204_483b_978a_5818dee1b4ed.slice. 
Jan 13 21:20:41.365339 systemd[1]: kubepods-burstable-pode74a0430_f204_483b_978a_5818dee1b4ed.slice: Consumed 8.348s CPU time. Jan 13 21:20:41.433904 containerd[1463]: time="2025-01-13T21:20:41.433847122Z" level=info msg="RemoveContainer for \"6fc179b0adff0e2af014f51cc052f823e3e3a50ecade047cce4e2f9aae44156e\" returns successfully" Jan 13 21:20:41.434220 kubelet[2591]: I0113 21:20:41.434162 2591 scope.go:117] "RemoveContainer" containerID="c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3" Jan 13 21:20:41.435334 containerd[1463]: time="2025-01-13T21:20:41.435293939Z" level=info msg="RemoveContainer for \"c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3\"" Jan 13 21:20:41.513544 containerd[1463]: time="2025-01-13T21:20:41.513486451Z" level=info msg="RemoveContainer for \"c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3\" returns successfully" Jan 13 21:20:41.513841 kubelet[2591]: I0113 21:20:41.513801 2591 scope.go:117] "RemoveContainer" containerID="b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b" Jan 13 21:20:41.515060 containerd[1463]: time="2025-01-13T21:20:41.515014282Z" level=info msg="RemoveContainer for \"b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b\"" Jan 13 21:20:41.580109 containerd[1463]: time="2025-01-13T21:20:41.580037608Z" level=info msg="RemoveContainer for \"b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b\" returns successfully" Jan 13 21:20:41.580412 kubelet[2591]: I0113 21:20:41.580378 2591 scope.go:117] "RemoveContainer" containerID="8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3" Jan 13 21:20:41.582128 containerd[1463]: time="2025-01-13T21:20:41.581745892Z" level=info msg="RemoveContainer for \"8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3\"" Jan 13 21:20:41.711742 containerd[1463]: time="2025-01-13T21:20:41.711591832Z" level=info msg="RemoveContainer for \"8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3\" returns successfully" Jan 13 21:20:41.711935 kubelet[2591]: I0113 21:20:41.711888 2591 scope.go:117] "RemoveContainer" containerID="349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16" Jan 13 21:20:41.717723 containerd[1463]: time="2025-01-13T21:20:41.717676977Z" level=info msg="RemoveContainer for \"349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16\"" Jan 13 21:20:41.818631 containerd[1463]: time="2025-01-13T21:20:41.818585722Z" level=info msg="RemoveContainer for \"349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16\" returns successfully" Jan 13 21:20:41.818926 kubelet[2591]: I0113 21:20:41.818892 2591 scope.go:117] "RemoveContainer" containerID="ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e" Jan 13 21:20:41.819966 containerd[1463]: time="2025-01-13T21:20:41.819932760Z" level=info msg="RemoveContainer for \"ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e\"" Jan 13 21:20:41.942268 containerd[1463]: time="2025-01-13T21:20:41.942206683Z" level=info msg="RemoveContainer for \"ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e\" returns successfully" Jan 13 21:20:41.942539 kubelet[2591]: I0113 21:20:41.942484 2591 scope.go:117] "RemoveContainer" containerID="c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3" Jan 13 21:20:41.946481 containerd[1463]: time="2025-01-13T21:20:41.946442506Z" level=error msg="ContainerStatus for \"c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3\" failed" error="rpc error: 
code = NotFound desc = an error occurred when try to find container \"c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3\": not found" Jan 13 21:20:41.946686 kubelet[2591]: E0113 21:20:41.946639 2591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3\": not found" containerID="c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3" Jan 13 21:20:41.946778 kubelet[2591]: I0113 21:20:41.946762 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3"} err="failed to get container status \"c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"c48644989d1e9c7ba32b5d1ff692f31192febbc298f4319abff55e5a040b93a3\": not found" Jan 13 21:20:41.946804 kubelet[2591]: I0113 21:20:41.946782 2591 scope.go:117] "RemoveContainer" containerID="b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b" Jan 13 21:20:41.947006 containerd[1463]: time="2025-01-13T21:20:41.946970238Z" level=error msg="ContainerStatus for \"b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b\": not found" Jan 13 21:20:41.947129 kubelet[2591]: E0113 21:20:41.947110 2591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b\": not found" containerID="b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b" Jan 13 21:20:41.947169 kubelet[2591]: I0113 21:20:41.947137 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b"} err="failed to get container status \"b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b\": rpc error: code = NotFound desc = an error occurred when try to find container \"b8ba968370f621b24e395f7cff976075649112479e37f887dd2c3a2a7d64ad1b\": not found" Jan 13 21:20:41.947169 kubelet[2591]: I0113 21:20:41.947148 2591 scope.go:117] "RemoveContainer" containerID="8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3" Jan 13 21:20:41.947411 containerd[1463]: time="2025-01-13T21:20:41.947370709Z" level=error msg="ContainerStatus for \"8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3\": not found" Jan 13 21:20:41.947577 kubelet[2591]: E0113 21:20:41.947542 2591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3\": not found" containerID="8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3" Jan 13 21:20:41.947625 kubelet[2591]: I0113 21:20:41.947597 2591 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3"} err="failed to get container status \"8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e469f45b0f5b3aa6f2d37bceb4bc9833c9fa9d956c2bdd2e48014e1d9f1c6f3\": not found" Jan 13 21:20:41.947625 kubelet[2591]: I0113 21:20:41.947611 2591 scope.go:117] "RemoveContainer" containerID="349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16" Jan 13 21:20:41.947859 containerd[1463]: time="2025-01-13T21:20:41.947819981Z" level=error msg="ContainerStatus for \"349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16\": not found" Jan 13 21:20:41.947993 kubelet[2591]: E0113 21:20:41.947976 2591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16\": not found" containerID="349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16" Jan 13 21:20:41.948045 kubelet[2591]: I0113 21:20:41.948000 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16"} err="failed to get container status \"349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16\": rpc error: code = NotFound desc = an error occurred when try to find container \"349f0010e56ad0a0af0fecc1d5e46fe70d5eea7cf46f3fa1d330a321f8936b16\": not found" Jan 13 21:20:41.948045 kubelet[2591]: I0113 21:20:41.948009 2591 scope.go:117] "RemoveContainer" containerID="ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e" Jan 13 21:20:41.948189 containerd[1463]: time="2025-01-13T21:20:41.948156220Z" level=error msg="ContainerStatus for \"ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e\": not found" Jan 13 21:20:41.948333 kubelet[2591]: E0113 21:20:41.948300 2591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e\": not found" containerID="ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e" Jan 13 21:20:41.948384 kubelet[2591]: I0113 21:20:41.948356 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e"} err="failed to get container status \"ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec2294df35347f643f9b5eeb0338a124ecd7b2c717cc72ea907f7f514fa0eb4e\": not found" Jan 13 21:20:41.979163 sshd[4270]: pam_unix(sshd:session): session closed for user core Jan 13 21:20:41.987726 systemd[1]: sshd@26-10.0.0.36:22-10.0.0.1:58322.service: Deactivated successfully. Jan 13 21:20:41.990079 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 21:20:41.992154 systemd-logind[1452]: Session 27 logged out. 
Waiting for processes to exit. Jan 13 21:20:41.998701 systemd[1]: Started sshd@27-10.0.0.36:22-10.0.0.1:58330.service - OpenSSH per-connection server daemon (10.0.0.1:58330). Jan 13 21:20:41.999741 systemd-logind[1452]: Removed session 27. Jan 13 21:20:42.034946 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 58330 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:20:42.036816 sshd[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:20:42.041224 systemd-logind[1452]: New session 28 of user core. Jan 13 21:20:42.051459 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 13 21:20:42.142148 kubelet[2591]: I0113 21:20:42.142109 2591 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9242f17f-039a-43d8-ba7b-cff54d82d540" path="/var/lib/kubelet/pods/9242f17f-039a-43d8-ba7b-cff54d82d540/volumes" Jan 13 21:20:42.142968 kubelet[2591]: I0113 21:20:42.142951 2591 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e74a0430-f204-483b-978a-5818dee1b4ed" path="/var/lib/kubelet/pods/e74a0430-f204-483b-978a-5818dee1b4ed/volumes" Jan 13 21:20:42.833404 sshd[4431]: pam_unix(sshd:session): session closed for user core Jan 13 21:20:42.842964 systemd[1]: sshd@27-10.0.0.36:22-10.0.0.1:58330.service: Deactivated successfully. Jan 13 21:20:42.846124 kubelet[2591]: I0113 21:20:42.846075 2591 topology_manager.go:215] "Topology Admit Handler" podUID="5acf0394-e5d1-4679-b519-5d9612d3f957" podNamespace="kube-system" podName="cilium-tbtwd" Jan 13 21:20:42.849571 kubelet[2591]: E0113 21:20:42.846149 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e74a0430-f204-483b-978a-5818dee1b4ed" containerName="mount-cgroup" Jan 13 21:20:42.849571 kubelet[2591]: E0113 21:20:42.846162 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e74a0430-f204-483b-978a-5818dee1b4ed" containerName="clean-cilium-state" Jan 13 21:20:42.849571 kubelet[2591]: E0113 21:20:42.846171 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e74a0430-f204-483b-978a-5818dee1b4ed" containerName="cilium-agent" Jan 13 21:20:42.849571 kubelet[2591]: E0113 21:20:42.846180 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9242f17f-039a-43d8-ba7b-cff54d82d540" containerName="cilium-operator" Jan 13 21:20:42.849571 kubelet[2591]: E0113 21:20:42.846190 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e74a0430-f204-483b-978a-5818dee1b4ed" containerName="apply-sysctl-overwrites" Jan 13 21:20:42.849571 kubelet[2591]: E0113 21:20:42.846198 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e74a0430-f204-483b-978a-5818dee1b4ed" containerName="mount-bpf-fs" Jan 13 21:20:42.849571 kubelet[2591]: I0113 21:20:42.846223 2591 memory_manager.go:354] "RemoveStaleState removing state" podUID="e74a0430-f204-483b-978a-5818dee1b4ed" containerName="cilium-agent" Jan 13 21:20:42.849571 kubelet[2591]: I0113 21:20:42.846233 2591 memory_manager.go:354] "RemoveStaleState removing state" podUID="9242f17f-039a-43d8-ba7b-cff54d82d540" containerName="cilium-operator" Jan 13 21:20:42.846199 systemd[1]: session-28.scope: Deactivated successfully. Jan 13 21:20:42.850513 systemd-logind[1452]: Session 28 logged out. Waiting for processes to exit. Jan 13 21:20:42.860000 systemd[1]: Started sshd@28-10.0.0.36:22-10.0.0.1:58338.service - OpenSSH per-connection server daemon (10.0.0.1:58338). 
Jan 13 21:20:42.868460 systemd-logind[1452]: Removed session 28. Jan 13 21:20:42.874403 systemd[1]: Created slice kubepods-burstable-pod5acf0394_e5d1_4679_b519_5d9612d3f957.slice - libcontainer container kubepods-burstable-pod5acf0394_e5d1_4679_b519_5d9612d3f957.slice. Jan 13 21:20:42.899417 sshd[4445]: Accepted publickey for core from 10.0.0.1 port 58338 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:20:42.901188 sshd[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:20:42.906432 systemd-logind[1452]: New session 29 of user core. Jan 13 21:20:42.916489 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 13 21:20:42.967838 sshd[4445]: pam_unix(sshd:session): session closed for user core Jan 13 21:20:42.982301 systemd[1]: sshd@28-10.0.0.36:22-10.0.0.1:58338.service: Deactivated successfully. Jan 13 21:20:42.984505 systemd[1]: session-29.scope: Deactivated successfully. Jan 13 21:20:42.986339 systemd-logind[1452]: Session 29 logged out. Waiting for processes to exit. Jan 13 21:20:42.999738 systemd[1]: Started sshd@29-10.0.0.36:22-10.0.0.1:58342.service - OpenSSH per-connection server daemon (10.0.0.1:58342). Jan 13 21:20:43.000875 systemd-logind[1452]: Removed session 29. Jan 13 21:20:43.017636 kubelet[2591]: I0113 21:20:43.017608 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5acf0394-e5d1-4679-b519-5d9612d3f957-cilium-cgroup\") pod \"cilium-tbtwd\" (UID: \"5acf0394-e5d1-4679-b519-5d9612d3f957\") " pod="kube-system/cilium-tbtwd" Jan 13 21:20:43.017636 kubelet[2591]: I0113 21:20:43.017673 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5acf0394-e5d1-4679-b519-5d9612d3f957-cilium-run\") pod \"cilium-tbtwd\" (UID: \"5acf0394-e5d1-4679-b519-5d9612d3f957\") " pod="kube-system/cilium-tbtwd" Jan 13 21:20:43.017636 kubelet[2591]: I0113 21:20:43.017748 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5acf0394-e5d1-4679-b519-5d9612d3f957-bpf-maps\") pod \"cilium-tbtwd\" (UID: \"5acf0394-e5d1-4679-b519-5d9612d3f957\") " pod="kube-system/cilium-tbtwd" Jan 13 21:20:43.017636 kubelet[2591]: I0113 21:20:43.017777 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5acf0394-e5d1-4679-b519-5d9612d3f957-host-proc-sys-kernel\") pod \"cilium-tbtwd\" (UID: \"5acf0394-e5d1-4679-b519-5d9612d3f957\") " pod="kube-system/cilium-tbtwd" Jan 13 21:20:43.017636 kubelet[2591]: I0113 21:20:43.017796 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5acf0394-e5d1-4679-b519-5d9612d3f957-etc-cni-netd\") pod \"cilium-tbtwd\" (UID: \"5acf0394-e5d1-4679-b519-5d9612d3f957\") " pod="kube-system/cilium-tbtwd" Jan 13 21:20:43.017636 kubelet[2591]: I0113 21:20:43.017813 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5acf0394-e5d1-4679-b519-5d9612d3f957-clustermesh-secrets\") pod \"cilium-tbtwd\" (UID: \"5acf0394-e5d1-4679-b519-5d9612d3f957\") " pod="kube-system/cilium-tbtwd" Jan 13 21:20:43.018065 kubelet[2591]: 
I0113 21:20:43.017832 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5acf0394-e5d1-4679-b519-5d9612d3f957-xtables-lock\") pod \"cilium-tbtwd\" (UID: \"5acf0394-e5d1-4679-b519-5d9612d3f957\") " pod="kube-system/cilium-tbtwd" Jan 13 21:20:43.018065 kubelet[2591]: I0113 21:20:43.017849 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5acf0394-e5d1-4679-b519-5d9612d3f957-host-proc-sys-net\") pod \"cilium-tbtwd\" (UID: \"5acf0394-e5d1-4679-b519-5d9612d3f957\") " pod="kube-system/cilium-tbtwd" Jan 13 21:20:43.018065 kubelet[2591]: I0113 21:20:43.017936 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbsdv\" (UniqueName: \"kubernetes.io/projected/5acf0394-e5d1-4679-b519-5d9612d3f957-kube-api-access-rbsdv\") pod \"cilium-tbtwd\" (UID: \"5acf0394-e5d1-4679-b519-5d9612d3f957\") " pod="kube-system/cilium-tbtwd" Jan 13 21:20:43.018065 kubelet[2591]: I0113 21:20:43.017984 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5acf0394-e5d1-4679-b519-5d9612d3f957-hostproc\") pod \"cilium-tbtwd\" (UID: \"5acf0394-e5d1-4679-b519-5d9612d3f957\") " pod="kube-system/cilium-tbtwd" Jan 13 21:20:43.018065 kubelet[2591]: I0113 21:20:43.018059 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5acf0394-e5d1-4679-b519-5d9612d3f957-cni-path\") pod \"cilium-tbtwd\" (UID: \"5acf0394-e5d1-4679-b519-5d9612d3f957\") " pod="kube-system/cilium-tbtwd" Jan 13 21:20:43.018180 kubelet[2591]: I0113 21:20:43.018084 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5acf0394-e5d1-4679-b519-5d9612d3f957-lib-modules\") pod \"cilium-tbtwd\" (UID: \"5acf0394-e5d1-4679-b519-5d9612d3f957\") " pod="kube-system/cilium-tbtwd" Jan 13 21:20:43.018180 kubelet[2591]: I0113 21:20:43.018107 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5acf0394-e5d1-4679-b519-5d9612d3f957-cilium-config-path\") pod \"cilium-tbtwd\" (UID: \"5acf0394-e5d1-4679-b519-5d9612d3f957\") " pod="kube-system/cilium-tbtwd" Jan 13 21:20:43.018180 kubelet[2591]: I0113 21:20:43.018135 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5acf0394-e5d1-4679-b519-5d9612d3f957-hubble-tls\") pod \"cilium-tbtwd\" (UID: \"5acf0394-e5d1-4679-b519-5d9612d3f957\") " pod="kube-system/cilium-tbtwd" Jan 13 21:20:43.018243 kubelet[2591]: I0113 21:20:43.018184 2591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5acf0394-e5d1-4679-b519-5d9612d3f957-cilium-ipsec-secrets\") pod \"cilium-tbtwd\" (UID: \"5acf0394-e5d1-4679-b519-5d9612d3f957\") " pod="kube-system/cilium-tbtwd" Jan 13 21:20:43.029968 sshd[4456]: Accepted publickey for core from 10.0.0.1 port 58342 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:20:43.031691 sshd[4456]: pam_unix(sshd:session): session opened 
for user core(uid=500) by core(uid=0) Jan 13 21:20:43.036281 systemd-logind[1452]: New session 30 of user core. Jan 13 21:20:43.045479 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 13 21:20:43.180247 kubelet[2591]: E0113 21:20:43.180181 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:20:43.180901 containerd[1463]: time="2025-01-13T21:20:43.180850231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tbtwd,Uid:5acf0394-e5d1-4679-b519-5d9612d3f957,Namespace:kube-system,Attempt:0,}" Jan 13 21:20:43.203400 containerd[1463]: time="2025-01-13T21:20:43.203109854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:20:43.203400 containerd[1463]: time="2025-01-13T21:20:43.203178795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:20:43.203400 containerd[1463]: time="2025-01-13T21:20:43.203191188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:20:43.203400 containerd[1463]: time="2025-01-13T21:20:43.203282111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:20:43.222515 systemd[1]: Started cri-containerd-47ec6fa9660b32cc8623e893ab8ffad6d4e453b8fecefea3f26117c8276da227.scope - libcontainer container 47ec6fa9660b32cc8623e893ab8ffad6d4e453b8fecefea3f26117c8276da227. Jan 13 21:20:43.245904 containerd[1463]: time="2025-01-13T21:20:43.245838832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tbtwd,Uid:5acf0394-e5d1-4679-b519-5d9612d3f957,Namespace:kube-system,Attempt:0,} returns sandbox id \"47ec6fa9660b32cc8623e893ab8ffad6d4e453b8fecefea3f26117c8276da227\"" Jan 13 21:20:43.247146 kubelet[2591]: E0113 21:20:43.247109 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:20:43.249567 containerd[1463]: time="2025-01-13T21:20:43.249442271Z" level=info msg="CreateContainer within sandbox \"47ec6fa9660b32cc8623e893ab8ffad6d4e453b8fecefea3f26117c8276da227\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:20:43.266118 containerd[1463]: time="2025-01-13T21:20:43.266046862Z" level=info msg="CreateContainer within sandbox \"47ec6fa9660b32cc8623e893ab8ffad6d4e453b8fecefea3f26117c8276da227\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3160af922d160efecaa6b6ddbe460b45444a1e4b4784987d8d16afc934cfb745\"" Jan 13 21:20:43.266519 containerd[1463]: time="2025-01-13T21:20:43.266497527Z" level=info msg="StartContainer for \"3160af922d160efecaa6b6ddbe460b45444a1e4b4784987d8d16afc934cfb745\"" Jan 13 21:20:43.296492 systemd[1]: Started cri-containerd-3160af922d160efecaa6b6ddbe460b45444a1e4b4784987d8d16afc934cfb745.scope - libcontainer container 3160af922d160efecaa6b6ddbe460b45444a1e4b4784987d8d16afc934cfb745. 
Jan 13 21:20:43.327079 containerd[1463]: time="2025-01-13T21:20:43.327021024Z" level=info msg="StartContainer for \"3160af922d160efecaa6b6ddbe460b45444a1e4b4784987d8d16afc934cfb745\" returns successfully" Jan 13 21:20:43.338698 systemd[1]: cri-containerd-3160af922d160efecaa6b6ddbe460b45444a1e4b4784987d8d16afc934cfb745.scope: Deactivated successfully. Jan 13 21:20:43.367581 kubelet[2591]: E0113 21:20:43.367554 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:20:43.376473 containerd[1463]: time="2025-01-13T21:20:43.376394192Z" level=info msg="shim disconnected" id=3160af922d160efecaa6b6ddbe460b45444a1e4b4784987d8d16afc934cfb745 namespace=k8s.io Jan 13 21:20:43.376741 containerd[1463]: time="2025-01-13T21:20:43.376695544Z" level=warning msg="cleaning up after shim disconnected" id=3160af922d160efecaa6b6ddbe460b45444a1e4b4784987d8d16afc934cfb745 namespace=k8s.io Jan 13 21:20:43.376741 containerd[1463]: time="2025-01-13T21:20:43.376718076Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:20:44.217606 kubelet[2591]: E0113 21:20:44.217569 2591 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 21:20:44.370069 kubelet[2591]: E0113 21:20:44.370037 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:20:44.372008 containerd[1463]: time="2025-01-13T21:20:44.371950321Z" level=info msg="CreateContainer within sandbox \"47ec6fa9660b32cc8623e893ab8ffad6d4e453b8fecefea3f26117c8276da227\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:20:44.387805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3042466071.mount: Deactivated successfully. Jan 13 21:20:44.389128 containerd[1463]: time="2025-01-13T21:20:44.389095931Z" level=info msg="CreateContainer within sandbox \"47ec6fa9660b32cc8623e893ab8ffad6d4e453b8fecefea3f26117c8276da227\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a56737aa56a8cb4974c46eb902a81efe97cc74e042c5dfcf6755786658876c39\"" Jan 13 21:20:44.390379 containerd[1463]: time="2025-01-13T21:20:44.389571543Z" level=info msg="StartContainer for \"a56737aa56a8cb4974c46eb902a81efe97cc74e042c5dfcf6755786658876c39\"" Jan 13 21:20:44.418453 systemd[1]: Started cri-containerd-a56737aa56a8cb4974c46eb902a81efe97cc74e042c5dfcf6755786658876c39.scope - libcontainer container a56737aa56a8cb4974c46eb902a81efe97cc74e042c5dfcf6755786658876c39. Jan 13 21:20:44.445980 containerd[1463]: time="2025-01-13T21:20:44.445924497Z" level=info msg="StartContainer for \"a56737aa56a8cb4974c46eb902a81efe97cc74e042c5dfcf6755786658876c39\" returns successfully" Jan 13 21:20:44.455780 systemd[1]: cri-containerd-a56737aa56a8cb4974c46eb902a81efe97cc74e042c5dfcf6755786658876c39.scope: Deactivated successfully. 
Jan 13 21:20:44.479974 containerd[1463]: time="2025-01-13T21:20:44.479817980Z" level=info msg="shim disconnected" id=a56737aa56a8cb4974c46eb902a81efe97cc74e042c5dfcf6755786658876c39 namespace=k8s.io Jan 13 21:20:44.479974 containerd[1463]: time="2025-01-13T21:20:44.479886501Z" level=warning msg="cleaning up after shim disconnected" id=a56737aa56a8cb4974c46eb902a81efe97cc74e042c5dfcf6755786658876c39 namespace=k8s.io Jan 13 21:20:44.479974 containerd[1463]: time="2025-01-13T21:20:44.479899225Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:20:45.125115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a56737aa56a8cb4974c46eb902a81efe97cc74e042c5dfcf6755786658876c39-rootfs.mount: Deactivated successfully. Jan 13 21:20:45.373411 kubelet[2591]: E0113 21:20:45.373360 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:20:45.375109 containerd[1463]: time="2025-01-13T21:20:45.374982375Z" level=info msg="CreateContainer within sandbox \"47ec6fa9660b32cc8623e893ab8ffad6d4e453b8fecefea3f26117c8276da227\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:20:45.398203 containerd[1463]: time="2025-01-13T21:20:45.398073804Z" level=info msg="CreateContainer within sandbox \"47ec6fa9660b32cc8623e893ab8ffad6d4e453b8fecefea3f26117c8276da227\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5e14d8dbd43551b0f3db1af7352abcbb671d38451caa6b44f0d113ce96214270\"" Jan 13 21:20:45.398667 containerd[1463]: time="2025-01-13T21:20:45.398612767Z" level=info msg="StartContainer for \"5e14d8dbd43551b0f3db1af7352abcbb671d38451caa6b44f0d113ce96214270\"" Jan 13 21:20:45.430477 systemd[1]: Started cri-containerd-5e14d8dbd43551b0f3db1af7352abcbb671d38451caa6b44f0d113ce96214270.scope - libcontainer container 5e14d8dbd43551b0f3db1af7352abcbb671d38451caa6b44f0d113ce96214270. Jan 13 21:20:45.460196 containerd[1463]: time="2025-01-13T21:20:45.460150908Z" level=info msg="StartContainer for \"5e14d8dbd43551b0f3db1af7352abcbb671d38451caa6b44f0d113ce96214270\" returns successfully" Jan 13 21:20:45.461865 systemd[1]: cri-containerd-5e14d8dbd43551b0f3db1af7352abcbb671d38451caa6b44f0d113ce96214270.scope: Deactivated successfully. Jan 13 21:20:45.488473 containerd[1463]: time="2025-01-13T21:20:45.488390881Z" level=info msg="shim disconnected" id=5e14d8dbd43551b0f3db1af7352abcbb671d38451caa6b44f0d113ce96214270 namespace=k8s.io Jan 13 21:20:45.488473 containerd[1463]: time="2025-01-13T21:20:45.488451477Z" level=warning msg="cleaning up after shim disconnected" id=5e14d8dbd43551b0f3db1af7352abcbb671d38451caa6b44f0d113ce96214270 namespace=k8s.io Jan 13 21:20:45.488473 containerd[1463]: time="2025-01-13T21:20:45.488462979Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:20:46.125203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e14d8dbd43551b0f3db1af7352abcbb671d38451caa6b44f0d113ce96214270-rootfs.mount: Deactivated successfully. 
Jan 13 21:20:46.377247 kubelet[2591]: E0113 21:20:46.377099 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:20:46.379039 containerd[1463]: time="2025-01-13T21:20:46.378984281Z" level=info msg="CreateContainer within sandbox \"47ec6fa9660b32cc8623e893ab8ffad6d4e453b8fecefea3f26117c8276da227\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:20:46.450856 kubelet[2591]: I0113 21:20:46.450807 2591 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T21:20:46Z","lastTransitionTime":"2025-01-13T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 13 21:20:47.105046 containerd[1463]: time="2025-01-13T21:20:47.104984098Z" level=info msg="CreateContainer within sandbox \"47ec6fa9660b32cc8623e893ab8ffad6d4e453b8fecefea3f26117c8276da227\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"01640fe5812c3ac51ee025b0ffb1612044b9a509700b0d1a9a739293b391a440\"" Jan 13 21:20:47.105785 containerd[1463]: time="2025-01-13T21:20:47.105749889Z" level=info msg="StartContainer for \"01640fe5812c3ac51ee025b0ffb1612044b9a509700b0d1a9a739293b391a440\"" Jan 13 21:20:47.141469 systemd[1]: Started cri-containerd-01640fe5812c3ac51ee025b0ffb1612044b9a509700b0d1a9a739293b391a440.scope - libcontainer container 01640fe5812c3ac51ee025b0ffb1612044b9a509700b0d1a9a739293b391a440. Jan 13 21:20:47.164185 systemd[1]: cri-containerd-01640fe5812c3ac51ee025b0ffb1612044b9a509700b0d1a9a739293b391a440.scope: Deactivated successfully. Jan 13 21:20:47.316787 containerd[1463]: time="2025-01-13T21:20:47.316728963Z" level=info msg="StartContainer for \"01640fe5812c3ac51ee025b0ffb1612044b9a509700b0d1a9a739293b391a440\" returns successfully" Jan 13 21:20:47.335654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01640fe5812c3ac51ee025b0ffb1612044b9a509700b0d1a9a739293b391a440-rootfs.mount: Deactivated successfully. 
Jan 13 21:20:47.381207 kubelet[2591]: E0113 21:20:47.381073 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:20:47.642267 containerd[1463]: time="2025-01-13T21:20:47.642085253Z" level=info msg="shim disconnected" id=01640fe5812c3ac51ee025b0ffb1612044b9a509700b0d1a9a739293b391a440 namespace=k8s.io Jan 13 21:20:47.642267 containerd[1463]: time="2025-01-13T21:20:47.642142781Z" level=warning msg="cleaning up after shim disconnected" id=01640fe5812c3ac51ee025b0ffb1612044b9a509700b0d1a9a739293b391a440 namespace=k8s.io Jan 13 21:20:47.642267 containerd[1463]: time="2025-01-13T21:20:47.642152880Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:20:48.385235 kubelet[2591]: E0113 21:20:48.384538 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:20:48.386504 containerd[1463]: time="2025-01-13T21:20:48.386472137Z" level=info msg="CreateContainer within sandbox \"47ec6fa9660b32cc8623e893ab8ffad6d4e453b8fecefea3f26117c8276da227\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:20:48.456239 containerd[1463]: time="2025-01-13T21:20:48.456184326Z" level=info msg="CreateContainer within sandbox \"47ec6fa9660b32cc8623e893ab8ffad6d4e453b8fecefea3f26117c8276da227\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a4d3a5b62b90c318e080af9759cadf689a5faf30871dd1acbf360e0211baa16a\"" Jan 13 21:20:48.456814 containerd[1463]: time="2025-01-13T21:20:48.456732315Z" level=info msg="StartContainer for \"a4d3a5b62b90c318e080af9759cadf689a5faf30871dd1acbf360e0211baa16a\"" Jan 13 21:20:48.492513 systemd[1]: Started cri-containerd-a4d3a5b62b90c318e080af9759cadf689a5faf30871dd1acbf360e0211baa16a.scope - libcontainer container a4d3a5b62b90c318e080af9759cadf689a5faf30871dd1acbf360e0211baa16a. 
Jan 13 21:20:48.525319 containerd[1463]: time="2025-01-13T21:20:48.525260790Z" level=info msg="StartContainer for \"a4d3a5b62b90c318e080af9759cadf689a5faf30871dd1acbf360e0211baa16a\" returns successfully" Jan 13 21:20:48.967366 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 13 21:20:49.389395 kubelet[2591]: E0113 21:20:49.389360 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:20:49.525031 kubelet[2591]: I0113 21:20:49.524967 2591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-tbtwd" podStartSLOduration=7.524916753 podStartE2EDuration="7.524916753s" podCreationTimestamp="2025-01-13 21:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:20:49.524883239 +0000 UTC m=+105.500699876" watchObservedRunningTime="2025-01-13 21:20:49.524916753 +0000 UTC m=+105.500733390" Jan 13 21:20:51.181361 kubelet[2591]: E0113 21:20:51.181309 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:20:52.080509 systemd-networkd[1397]: lxc_health: Link UP Jan 13 21:20:52.089061 systemd-networkd[1397]: lxc_health: Gained carrier Jan 13 21:20:52.123551 systemd[1]: run-containerd-runc-k8s.io-a4d3a5b62b90c318e080af9759cadf689a5faf30871dd1acbf360e0211baa16a-runc.ILXnTE.mount: Deactivated successfully. Jan 13 21:20:53.182465 kubelet[2591]: E0113 21:20:53.182413 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:20:53.397640 kubelet[2591]: E0113 21:20:53.397591 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:20:53.399530 systemd-networkd[1397]: lxc_health: Gained IPv6LL Jan 13 21:20:54.399755 kubelet[2591]: E0113 21:20:54.399711 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:20:58.139630 kubelet[2591]: E0113 21:20:58.139591 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:20:58.449158 systemd[1]: run-containerd-runc-k8s.io-a4d3a5b62b90c318e080af9759cadf689a5faf30871dd1acbf360e0211baa16a-runc.jPNBhN.mount: Deactivated successfully. Jan 13 21:20:58.502108 sshd[4456]: pam_unix(sshd:session): session closed for user core Jan 13 21:20:58.506597 systemd[1]: sshd@29-10.0.0.36:22-10.0.0.1:58342.service: Deactivated successfully. Jan 13 21:20:58.508926 systemd[1]: session-30.scope: Deactivated successfully. Jan 13 21:20:58.509719 systemd-logind[1452]: Session 30 logged out. Waiting for processes to exit. Jan 13 21:20:58.510683 systemd-logind[1452]: Removed session 30.