Jan 29 12:01:50.872381 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 29 12:01:50.872402 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:01:50.872413 kernel: BIOS-provided physical RAM map: Jan 29 12:01:50.872419 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 12:01:50.872425 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 12:01:50.872431 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 12:01:50.872438 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 29 12:01:50.872445 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 29 12:01:50.872451 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 29 12:01:50.872459 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 29 12:01:50.872465 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 29 12:01:50.872471 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 12:01:50.872477 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 29 12:01:50.872483 kernel: NX (Execute Disable) protection: active Jan 29 12:01:50.872491 kernel: APIC: Static calls initialized Jan 29 12:01:50.872500 kernel: SMBIOS 2.8 present. 
Jan 29 12:01:50.872507 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 29 12:01:50.872513 kernel: Hypervisor detected: KVM Jan 29 12:01:50.872520 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 12:01:50.872527 kernel: kvm-clock: using sched offset of 2242601149 cycles Jan 29 12:01:50.872533 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 12:01:50.872541 kernel: tsc: Detected 2794.750 MHz processor Jan 29 12:01:50.872548 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 12:01:50.872555 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 12:01:50.872562 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 29 12:01:50.872571 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 12:01:50.872578 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 12:01:50.872584 kernel: Using GB pages for direct mapping Jan 29 12:01:50.872591 kernel: ACPI: Early table checksum verification disabled Jan 29 12:01:50.872598 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 29 12:01:50.872605 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:01:50.872612 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:01:50.872619 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:01:50.872627 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 29 12:01:50.872634 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:01:50.872641 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:01:50.872648 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:01:50.872654 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:01:50.872661 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Jan 29 12:01:50.872668 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Jan 29 12:01:50.872678 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 29 12:01:50.872687 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Jan 29 12:01:50.872695 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Jan 29 12:01:50.872702 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Jan 29 12:01:50.872709 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Jan 29 12:01:50.872716 kernel: No NUMA configuration found Jan 29 12:01:50.872723 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 29 12:01:50.872732 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 29 12:01:50.872739 kernel: Zone ranges: Jan 29 12:01:50.872746 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 12:01:50.872753 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 29 12:01:50.872760 kernel: Normal empty Jan 29 12:01:50.872767 kernel: Movable zone start for each node Jan 29 12:01:50.872774 kernel: Early memory node ranges Jan 29 12:01:50.872781 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 12:01:50.872788 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 29 12:01:50.872795 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 29 12:01:50.872816 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 12:01:50.872823 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 12:01:50.872830 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 29 12:01:50.872837 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 12:01:50.872844 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 12:01:50.872851 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 12:01:50.872858 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 12:01:50.872865 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 12:01:50.872872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 12:01:50.872881 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 12:01:50.872888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 12:01:50.872895 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 12:01:50.872902 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 12:01:50.872909 kernel: TSC deadline timer available Jan 29 12:01:50.872916 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 29 12:01:50.872923 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 12:01:50.872930 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 29 12:01:50.872937 kernel: kvm-guest: setup PV sched yield Jan 29 12:01:50.872944 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 29 12:01:50.872953 kernel: Booting paravirtualized kernel on KVM Jan 29 12:01:50.872961 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 12:01:50.872968 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 29 12:01:50.872975 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 29 12:01:50.872982 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 29 12:01:50.872989 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 29 12:01:50.872996 kernel: kvm-guest: PV spinlocks enabled Jan 29 12:01:50.873003 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 12:01:50.873011 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:01:50.873021 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 12:01:50.873028 kernel: random: crng init done Jan 29 12:01:50.873035 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 12:01:50.873042 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 12:01:50.873049 kernel: Fallback order for Node 0: 0 Jan 29 12:01:50.873056 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 29 12:01:50.873063 kernel: Policy zone: DMA32 Jan 29 12:01:50.873070 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 12:01:50.873079 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved) Jan 29 12:01:50.873086 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 29 12:01:50.873100 kernel: ftrace: allocating 37921 entries in 149 pages Jan 29 12:01:50.873107 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 12:01:50.873114 kernel: Dynamic Preempt: voluntary Jan 29 12:01:50.873121 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 12:01:50.873128 kernel: rcu: RCU event tracing is enabled. Jan 29 12:01:50.873136 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 29 12:01:50.873143 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 12:01:50.873152 kernel: Rude variant of Tasks RCU enabled. Jan 29 12:01:50.873159 kernel: Tracing variant of Tasks RCU enabled. Jan 29 12:01:50.873167 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 12:01:50.873173 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 29 12:01:50.873181 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 29 12:01:50.873188 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 12:01:50.873194 kernel: Console: colour VGA+ 80x25 Jan 29 12:01:50.873201 kernel: printk: console [ttyS0] enabled Jan 29 12:01:50.873208 kernel: ACPI: Core revision 20230628 Jan 29 12:01:50.873218 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 29 12:01:50.873225 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 12:01:50.873232 kernel: x2apic enabled Jan 29 12:01:50.873239 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 12:01:50.873246 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 29 12:01:50.873253 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 29 12:01:50.873260 kernel: kvm-guest: setup PV IPIs Jan 29 12:01:50.873276 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 12:01:50.873284 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 29 12:01:50.873291 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jan 29 12:01:50.873298 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 29 12:01:50.873306 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 29 12:01:50.873315 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 29 12:01:50.873322 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 12:01:50.873330 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 12:01:50.873337 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 12:01:50.873345 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 12:01:50.873354 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 29 12:01:50.873362 kernel: RETBleed: Mitigation: untrained return thunk Jan 29 12:01:50.873370 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 12:01:50.873377 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 12:01:50.873385 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 29 12:01:50.873393 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 29 12:01:50.873400 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 29 12:01:50.873407 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 12:01:50.873417 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 12:01:50.873424 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 12:01:50.873432 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 12:01:50.873440 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 29 12:01:50.873447 kernel: Freeing SMP alternatives memory: 32K Jan 29 12:01:50.873454 kernel: pid_max: default: 32768 minimum: 301 Jan 29 12:01:50.873462 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 12:01:50.873469 kernel: landlock: Up and running. Jan 29 12:01:50.873476 kernel: SELinux: Initializing. Jan 29 12:01:50.873486 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 12:01:50.873493 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 12:01:50.873501 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 29 12:01:50.873508 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 12:01:50.873516 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 12:01:50.873523 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 12:01:50.873531 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 29 12:01:50.873538 kernel: ... version: 0 Jan 29 12:01:50.873545 kernel: ... bit width: 48 Jan 29 12:01:50.873555 kernel: ... generic registers: 6 Jan 29 12:01:50.873562 kernel: ... value mask: 0000ffffffffffff Jan 29 12:01:50.873569 kernel: ... max period: 00007fffffffffff Jan 29 12:01:50.873577 kernel: ... fixed-purpose events: 0 Jan 29 12:01:50.873584 kernel: ... 
event mask: 000000000000003f Jan 29 12:01:50.873591 kernel: signal: max sigframe size: 1776 Jan 29 12:01:50.873599 kernel: rcu: Hierarchical SRCU implementation. Jan 29 12:01:50.873606 kernel: rcu: Max phase no-delay instances is 400. Jan 29 12:01:50.873613 kernel: smp: Bringing up secondary CPUs ... Jan 29 12:01:50.873623 kernel: smpboot: x86: Booting SMP configuration: Jan 29 12:01:50.873630 kernel: .... node #0, CPUs: #1 #2 #3 Jan 29 12:01:50.873637 kernel: smp: Brought up 1 node, 4 CPUs Jan 29 12:01:50.873645 kernel: smpboot: Max logical packages: 1 Jan 29 12:01:50.873652 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jan 29 12:01:50.873659 kernel: devtmpfs: initialized Jan 29 12:01:50.873667 kernel: x86/mm: Memory block size: 128MB Jan 29 12:01:50.873674 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 12:01:50.873682 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 29 12:01:50.873691 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 12:01:50.873698 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 12:01:50.873706 kernel: audit: initializing netlink subsys (disabled) Jan 29 12:01:50.873713 kernel: audit: type=2000 audit(1738152110.399:1): state=initialized audit_enabled=0 res=1 Jan 29 12:01:50.873720 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 12:01:50.873728 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 12:01:50.873735 kernel: cpuidle: using governor menu Jan 29 12:01:50.873742 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 12:01:50.873749 kernel: dca service started, version 1.12.1 Jan 29 12:01:50.873759 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 29 12:01:50.873766 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 29 12:01:50.873774 kernel: PCI: Using configuration type 1 for base access Jan 29 12:01:50.873781 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 12:01:50.873789 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 12:01:50.873796 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 12:01:50.873814 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 12:01:50.873822 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 12:01:50.873829 kernel: ACPI: Added _OSI(Module Device) Jan 29 12:01:50.873839 kernel: ACPI: Added _OSI(Processor Device) Jan 29 12:01:50.873846 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 12:01:50.873854 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 12:01:50.873861 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 12:01:50.873868 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 12:01:50.873876 kernel: ACPI: Interpreter enabled Jan 29 12:01:50.873883 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 12:01:50.873890 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 12:01:50.873898 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 12:01:50.873907 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 12:01:50.873915 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 29 12:01:50.873922 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 12:01:50.874104 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 12:01:50.874234 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 29 12:01:50.874355 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 29 12:01:50.874365 kernel: PCI host bridge to bus 0000:00 Jan 29 12:01:50.874492 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 12:01:50.874605 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 12:01:50.874715 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 12:01:50.874840 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 29 12:01:50.874952 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 12:01:50.875062 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 29 12:01:50.875181 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 12:01:50.875325 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 29 12:01:50.875460 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 29 12:01:50.875580 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 29 12:01:50.875701 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 29 12:01:50.875835 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 29 12:01:50.875958 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 12:01:50.876088 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 12:01:50.876225 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 29 12:01:50.876345 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 29 12:01:50.876469 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 29 12:01:50.876599 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 29 12:01:50.876721 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 29 12:01:50.876857 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 29 
12:01:50.877004 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 29 12:01:50.877186 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 29 12:01:50.877357 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 29 12:01:50.877529 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 29 12:01:50.877678 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 29 12:01:50.877815 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 29 12:01:50.877946 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 29 12:01:50.878075 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 29 12:01:50.878238 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 29 12:01:50.878361 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 29 12:01:50.878482 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 29 12:01:50.878619 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 29 12:01:50.878741 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 29 12:01:50.878751 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 12:01:50.878763 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 12:01:50.878771 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 12:01:50.878778 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 12:01:50.878786 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 29 12:01:50.878793 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 29 12:01:50.878813 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 29 12:01:50.878821 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 29 12:01:50.878828 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 29 12:01:50.878835 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 29 12:01:50.878846 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 29 12:01:50.878853 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 29 12:01:50.878861 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 29 12:01:50.878868 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 29 12:01:50.878875 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 29 12:01:50.878883 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 29 12:01:50.878890 kernel: iommu: Default domain type: Translated Jan 29 12:01:50.878898 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 12:01:50.878905 kernel: PCI: Using ACPI for IRQ routing Jan 29 12:01:50.878915 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 12:01:50.878922 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 12:01:50.878930 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 29 12:01:50.879051 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 29 12:01:50.879181 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 29 12:01:50.879301 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 12:01:50.879311 kernel: vgaarb: loaded Jan 29 12:01:50.879318 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 29 12:01:50.879329 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 29 12:01:50.879337 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 12:01:50.879344 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 
12:01:50.879352 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 12:01:50.879359 kernel: pnp: PnP ACPI init Jan 29 12:01:50.879493 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 29 12:01:50.879504 kernel: pnp: PnP ACPI: found 6 devices Jan 29 12:01:50.879512 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 12:01:50.879522 kernel: NET: Registered PF_INET protocol family Jan 29 12:01:50.879530 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 12:01:50.879537 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 12:01:50.879545 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 12:01:50.879552 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 12:01:50.879560 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 12:01:50.879568 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 12:01:50.879577 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 12:01:50.879588 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 12:01:50.879601 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 12:01:50.879609 kernel: NET: Registered PF_XDP protocol family Jan 29 12:01:50.879724 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 12:01:50.879945 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 12:01:50.880057 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 12:01:50.880181 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 29 12:01:50.880293 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 29 12:01:50.880402 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 29 12:01:50.880416 kernel: PCI: CLS 0 bytes, default 64 Jan 29 12:01:50.880424 kernel: Initialise system trusted keyrings Jan 29 12:01:50.880431 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 12:01:50.880439 kernel: Key type asymmetric registered Jan 29 12:01:50.880446 kernel: Asymmetric key parser 'x509' registered Jan 29 12:01:50.880454 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 12:01:50.880461 kernel: io scheduler mq-deadline registered Jan 29 12:01:50.880468 kernel: io scheduler kyber registered Jan 29 12:01:50.880476 kernel: io scheduler bfq registered Jan 29 12:01:50.880486 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 12:01:50.880494 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 29 12:01:50.880501 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 29 12:01:50.880509 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 29 12:01:50.880516 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 12:01:50.880524 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 12:01:50.880532 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 12:01:50.880539 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 12:01:50.880547 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 12:01:50.880679 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 29 12:01:50.880694 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 12:01:50.880823 kernel: 
rtc_cmos 00:04: registered as rtc0 Jan 29 12:01:50.880939 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T12:01:50 UTC (1738152110) Jan 29 12:01:50.881051 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 29 12:01:50.881061 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 29 12:01:50.881068 kernel: NET: Registered PF_INET6 protocol family Jan 29 12:01:50.881076 kernel: Segment Routing with IPv6 Jan 29 12:01:50.881087 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 12:01:50.881103 kernel: NET: Registered PF_PACKET protocol family Jan 29 12:01:50.881112 kernel: Key type dns_resolver registered Jan 29 12:01:50.881119 kernel: IPI shorthand broadcast: enabled Jan 29 12:01:50.881126 kernel: sched_clock: Marking stable (560002556, 104474103)->(716882054, -52405395) Jan 29 12:01:50.881134 kernel: registered taskstats version 1 Jan 29 12:01:50.881141 kernel: Loading compiled-in X.509 certificates Jan 29 12:01:50.881149 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 12:01:50.881156 kernel: Key type .fscrypt registered Jan 29 12:01:50.881166 kernel: Key type fscrypt-provisioning registered Jan 29 12:01:50.881174 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 12:01:50.881181 kernel: ima: Allocated hash algorithm: sha1 Jan 29 12:01:50.881188 kernel: ima: No architecture policies found Jan 29 12:01:50.881196 kernel: clk: Disabling unused clocks Jan 29 12:01:50.881203 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 12:01:50.881210 kernel: Write protecting the kernel read-only data: 36864k Jan 29 12:01:50.881218 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 12:01:50.881225 kernel: Run /init as init process Jan 29 12:01:50.881235 kernel: with arguments: Jan 29 12:01:50.881242 kernel: /init Jan 29 12:01:50.881249 kernel: with environment: Jan 29 12:01:50.881257 kernel: HOME=/ Jan 29 12:01:50.881264 kernel: TERM=linux Jan 29 12:01:50.881271 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 12:01:50.881281 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:01:50.881291 systemd[1]: Detected virtualization kvm. Jan 29 12:01:50.881302 systemd[1]: Detected architecture x86-64. Jan 29 12:01:50.881309 systemd[1]: Running in initrd. Jan 29 12:01:50.881317 systemd[1]: No hostname configured, using default hostname. Jan 29 12:01:50.881325 systemd[1]: Hostname set to . Jan 29 12:01:50.881333 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:01:50.881341 systemd[1]: Queued start job for default target initrd.target. Jan 29 12:01:50.881349 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:01:50.881357 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:01:50.881368 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 12:01:50.881388 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 29 12:01:50.881399 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 12:01:50.881407 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 12:01:50.881417 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 12:01:50.881428 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 12:01:50.881436 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:01:50.881444 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:01:50.881452 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:01:50.881461 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:01:50.881469 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:01:50.881477 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:01:50.881485 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:01:50.881495 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:01:50.881504 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:01:50.881512 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:01:50.881520 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:01:50.881528 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:01:50.881537 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:01:50.881545 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:01:50.881553 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 12:01:50.881561 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:01:50.881572 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 12:01:50.881583 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 12:01:50.881595 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:01:50.881605 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:01:50.881614 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:01:50.881622 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 12:01:50.881633 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:01:50.881641 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 12:01:50.881653 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:01:50.881680 systemd-journald[193]: Collecting audit messages is disabled. Jan 29 12:01:50.881701 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:01:50.881709 systemd-journald[193]: Journal started Jan 29 12:01:50.881729 systemd-journald[193]: Runtime Journal (/run/log/journal/557a08ac939f450784928d9669e6df2a) is 6.0M, max 48.4M, 42.3M free. Jan 29 12:01:50.883830 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:01:50.883896 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 29 12:01:50.893144 systemd-modules-load[194]: Inserted module 'overlay' Jan 29 12:01:50.922820 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 12:01:50.924428 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 29 12:01:50.925324 kernel: Bridge firewalling registered Jan 29 12:01:50.928334 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:01:50.929720 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:50.942997 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:01:50.946060 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:01:50.947133 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:01:50.948679 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:01:50.959487 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:01:50.960794 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:01:50.964402 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:01:50.981970 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 12:01:50.985319 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:01:50.994054 dracut-cmdline[227]: dracut-dracut-053 Jan 29 12:01:50.997370 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:01:51.020631 systemd-resolved[229]: Positive Trust Anchors: Jan 29 12:01:51.020648 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:01:51.020679 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:01:51.023179 systemd-resolved[229]: Defaulting to hostname 'linux'. Jan 29 12:01:51.024394 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:01:51.029436 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:01:51.092831 kernel: SCSI subsystem initialized Jan 29 12:01:51.104832 kernel: Loading iSCSI transport class v2.0-870. Jan 29 12:01:51.117830 kernel: iscsi: registered transport (tcp) Jan 29 12:01:51.139937 kernel: iscsi: registered transport (qla4xxx) Jan 29 12:01:51.140016 kernel: QLogic iSCSI HBA Driver Jan 29 12:01:51.193839 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 29 12:01:51.207055 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 12:01:51.233650 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 12:01:51.233695 kernel: device-mapper: uevent: version 1.0.3 Jan 29 12:01:51.233726 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 12:01:51.274831 kernel: raid6: avx2x4 gen() 30471 MB/s Jan 29 12:01:51.291822 kernel: raid6: avx2x2 gen() 31018 MB/s Jan 29 12:01:51.308908 kernel: raid6: avx2x1 gen() 25346 MB/s Jan 29 12:01:51.308930 kernel: raid6: using algorithm avx2x2 gen() 31018 MB/s Jan 29 12:01:51.326908 kernel: raid6: .... xor() 19816 MB/s, rmw enabled Jan 29 12:01:51.326943 kernel: raid6: using avx2x2 recovery algorithm Jan 29 12:01:51.352833 kernel: xor: automatically using best checksumming function avx Jan 29 12:01:51.555842 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 12:01:51.569365 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:01:51.580978 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:01:51.594953 systemd-udevd[412]: Using default interface naming scheme 'v255'. Jan 29 12:01:51.600516 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:01:51.617001 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 12:01:51.630165 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Jan 29 12:01:51.664132 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:01:51.670957 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:01:51.739477 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:01:51.749975 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 12:01:51.763021 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 12:01:51.766548 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:01:51.767211 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:01:51.767575 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:01:51.776833 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 12:01:51.787826 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 29 12:01:51.816346 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 12:01:51.816361 kernel: AES CTR mode by8 optimization enabled Jan 29 12:01:51.816371 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 29 12:01:51.816510 kernel: libata version 3.00 loaded. Jan 29 12:01:51.816521 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 12:01:51.816539 kernel: GPT:9289727 != 19775487 Jan 29 12:01:51.816549 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 12:01:51.816559 kernel: GPT:9289727 != 19775487 Jan 29 12:01:51.816568 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 12:01:51.816578 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:01:51.784640 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 12:01:51.799270 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:01:51.816173 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 29 12:01:51.824946 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 12:01:51.858190 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 12:01:51.858210 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 12:01:51.858400 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 12:01:51.858640 kernel: scsi host0: ahci Jan 29 12:01:51.858824 kernel: scsi host1: ahci Jan 29 12:01:51.858970 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (464) Jan 29 12:01:51.858982 kernel: scsi host2: ahci Jan 29 12:01:51.859132 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (457) Jan 29 12:01:51.859144 kernel: scsi host3: ahci Jan 29 12:01:51.859292 kernel: scsi host4: ahci Jan 29 12:01:51.859433 kernel: scsi host5: ahci Jan 29 12:01:51.859577 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 29 12:01:51.859589 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 29 12:01:51.859599 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 29 12:01:51.859621 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 29 12:01:51.859631 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 29 12:01:51.859641 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 29 12:01:51.816283 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:01:51.818473 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:01:51.820470 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:01:51.820591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:51.822682 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:01:51.831103 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:01:51.860304 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 12:01:51.892761 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:51.901362 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 12:01:51.912352 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 12:01:51.918250 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 12:01:51.921856 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 12:01:51.936083 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 12:01:51.938551 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:01:51.945171 disk-uuid[565]: Primary Header is updated. Jan 29 12:01:51.945171 disk-uuid[565]: Secondary Entries is updated. Jan 29 12:01:51.945171 disk-uuid[565]: Secondary Header is updated. Jan 29 12:01:51.948820 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:01:51.952830 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:01:51.963957 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 12:01:52.168026 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 12:01:52.168097 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 12:01:52.168111 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 12:01:52.169830 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 12:01:52.169910 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 29 12:01:52.170830 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 12:01:52.171834 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 29 12:01:52.173151 kernel: ata3.00: applying bridge limits Jan 29 12:01:52.173167 kernel: ata3.00: configured for UDMA/100 Jan 29 12:01:52.173843 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 12:01:52.223834 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 29 12:01:52.237564 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 12:01:52.237579 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 29 12:01:52.954841 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:01:52.954901 disk-uuid[567]: The operation has completed successfully. Jan 29 12:01:52.984055 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 12:01:52.984210 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 12:01:53.008978 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 12:01:53.014521 sh[591]: Success Jan 29 12:01:53.027829 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 29 12:01:53.062392 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 12:01:53.076237 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 12:01:53.079004 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 12:01:53.094869 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 12:01:53.094904 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:01:53.094919 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 12:01:53.094933 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 12:01:53.096272 kernel: BTRFS info (device dm-0): using free space tree Jan 29 12:01:53.100208 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 12:01:53.100968 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 12:01:53.107074 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 12:01:53.109773 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 12:01:53.119406 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:53.119445 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:01:53.119460 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:01:53.121840 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:01:53.129889 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 12:01:53.131860 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:53.141659 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 29 12:01:53.151928 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 12:01:53.209351 ignition[687]: Ignition 2.19.0 Jan 29 12:01:53.209365 ignition[687]: Stage: fetch-offline Jan 29 12:01:53.209403 ignition[687]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:53.209413 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:01:53.209528 ignition[687]: parsed url from cmdline: "" Jan 29 12:01:53.209533 ignition[687]: no config URL provided Jan 29 12:01:53.209541 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:01:53.209554 ignition[687]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:01:53.209585 ignition[687]: op(1): [started] loading QEMU firmware config module Jan 29 12:01:53.209592 ignition[687]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 29 12:01:53.221818 ignition[687]: op(1): [finished] loading QEMU firmware config module Jan 29 12:01:53.226406 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:01:53.239980 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:01:53.260747 systemd-networkd[781]: lo: Link UP Jan 29 12:01:53.260757 systemd-networkd[781]: lo: Gained carrier Jan 29 12:01:53.262595 systemd-networkd[781]: Enumeration completed Jan 29 12:01:53.262972 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:01:53.263373 systemd[1]: Reached target network.target - Network. Jan 29 12:01:53.263919 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:01:53.263924 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:01:53.265187 systemd-networkd[781]: eth0: Link UP Jan 29 12:01:53.265192 systemd-networkd[781]: eth0: Gained carrier Jan 29 12:01:53.265200 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:01:53.278021 ignition[687]: parsing config with SHA512: d7af6a7b9f9766f32399cbd8e5d87f8cefe7c5a8c26af9b520278495d8caf3b7240be68174dd2876ac86b2a9ef06771c1f921c1518da6cd4ead58559c3cd9276 Jan 29 12:01:53.283439 unknown[687]: fetched base config from "system" Jan 29 12:01:53.283456 unknown[687]: fetched user config from "qemu" Jan 29 12:01:53.284841 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 12:01:53.287792 ignition[687]: fetch-offline: fetch-offline passed Jan 29 12:01:53.288924 ignition[687]: Ignition finished successfully Jan 29 12:01:53.292667 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:01:53.293314 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 12:01:53.300020 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 29 12:01:53.313530 ignition[785]: Ignition 2.19.0 Jan 29 12:01:53.313542 ignition[785]: Stage: kargs Jan 29 12:01:53.313722 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:53.313736 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:01:53.314824 ignition[785]: kargs: kargs passed Jan 29 12:01:53.314878 ignition[785]: Ignition finished successfully Jan 29 12:01:53.322761 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 12:01:53.335032 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 12:01:53.347153 ignition[794]: Ignition 2.19.0 Jan 29 12:01:53.347165 ignition[794]: Stage: disks Jan 29 12:01:53.347322 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:53.347333 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:01:53.348452 ignition[794]: disks: disks passed Jan 29 12:01:53.348495 ignition[794]: Ignition finished successfully Jan 29 12:01:53.355433 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 12:01:53.355936 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 12:01:53.358403 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:01:53.361199 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:01:53.361632 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:01:53.367348 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:01:53.379970 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 12:01:53.394747 systemd-resolved[229]: Detected conflict on linux IN A 10.0.0.134 Jan 29 12:01:53.394761 systemd-resolved[229]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Jan 29 12:01:53.396656 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 12:01:53.403581 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 12:01:53.410957 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 12:01:53.492831 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 12:01:53.493504 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 12:01:53.495042 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 12:01:53.504881 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:01:53.506577 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 12:01:53.507759 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 12:01:53.513237 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812) Jan 29 12:01:53.507796 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 12:01:53.518835 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:53.518853 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:01:53.518863 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:01:53.507833 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 29 12:01:53.514487 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 12:01:53.519656 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 12:01:53.524829 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:01:53.526206 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:01:53.555834 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 12:01:53.561042 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Jan 29 12:01:53.565677 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 12:01:53.570301 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 12:01:53.654368 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 12:01:53.677920 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 12:01:53.679096 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 12:01:53.689827 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:53.702987 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 12:01:53.712581 ignition[926]: INFO : Ignition 2.19.0 Jan 29 12:01:53.712581 ignition[926]: INFO : Stage: mount Jan 29 12:01:53.714216 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:53.714216 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:01:53.714216 ignition[926]: INFO : mount: mount passed Jan 29 12:01:53.714216 ignition[926]: INFO : Ignition finished successfully Jan 29 12:01:53.719422 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 12:01:53.727989 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 12:01:54.093106 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 12:01:54.106069 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:01:54.113836 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939) Jan 29 12:01:54.113893 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:54.116323 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:01:54.116352 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:01:54.118839 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:01:54.121209 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 12:01:54.149543 ignition[957]: INFO : Ignition 2.19.0 Jan 29 12:01:54.149543 ignition[957]: INFO : Stage: files Jan 29 12:01:54.151471 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:54.151471 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:01:54.154421 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Jan 29 12:01:54.155772 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 12:01:54.155772 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 12:01:54.160202 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 12:01:54.161714 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 12:01:54.163495 unknown[957]: wrote ssh authorized keys file for user: core Jan 29 12:01:54.164703 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 12:01:54.166892 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 12:01:54.168691 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 12:01:54.170354 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:01:54.172444 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 12:01:54.210755 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 12:01:54.291770 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:01:54.294060 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 12:01:54.294060 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 12:01:54.772743 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 29 12:01:54.859275 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 12:01:54.859275 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 29 12:01:54.863149 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 12:01:54.863149 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:01:54.863149 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:01:54.863149 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:01:54.863149 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:01:54.863149 ignition[957]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:01:54.863149 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:01:54.863149 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:01:54.863149 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:01:54.863149 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:01:54.863149 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:01:54.863149 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:01:54.863149 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 12:01:55.161172 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 29 12:01:55.265010 systemd-networkd[781]: eth0: Gained IPv6LL Jan 29 12:01:55.527370 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:01:55.527370 ignition[957]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 29 12:01:55.531810 ignition[957]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 12:01:55.531810 ignition[957]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 12:01:55.531810 ignition[957]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 29 12:01:55.531810 ignition[957]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 29 12:01:55.531810 ignition[957]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:01:55.531810 ignition[957]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:01:55.531810 ignition[957]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 29 12:01:55.531810 ignition[957]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 29 12:01:55.531810 ignition[957]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 12:01:55.531810 ignition[957]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 12:01:55.531810 ignition[957]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jan 29 12:01:55.531810 ignition[957]: 
INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 12:01:55.557205 ignition[957]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 12:01:55.558919 ignition[957]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 12:01:55.558919 ignition[957]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 12:01:55.558919 ignition[957]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Jan 29 12:01:55.558919 ignition[957]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 12:01:55.558919 ignition[957]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:01:55.558919 ignition[957]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:01:55.558919 ignition[957]: INFO : files: files passed Jan 29 12:01:55.558919 ignition[957]: INFO : Ignition finished successfully Jan 29 12:01:55.560404 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 12:01:55.570948 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 12:01:55.572864 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 12:01:55.575068 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 12:01:55.575175 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 12:01:55.582913 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 12:01:55.585274 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:01:55.586979 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:01:55.589839 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:01:55.588451 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:01:55.590022 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 12:01:55.599941 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 12:01:55.623960 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 12:01:55.624119 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 12:01:55.624872 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 12:01:55.627676 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 12:01:55.629624 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 12:01:55.632659 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 12:01:55.650408 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:01:55.666927 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 12:01:55.677428 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Jan 29 12:01:55.678136 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:01:55.680128 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 12:01:55.680429 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 12:01:55.680558 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:01:55.685480 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 12:01:55.686252 systemd[1]: Stopped target basic.target - Basic System. Jan 29 12:01:55.688834 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 12:01:55.690504 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:01:55.690856 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 12:01:55.691214 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 12:01:55.691555 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:01:55.692089 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 12:01:55.700590 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 12:01:55.702493 systemd[1]: Stopped target swap.target - Swaps. Jan 29 12:01:55.702798 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 12:01:55.702943 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:01:55.707349 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:01:55.707913 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:01:55.711310 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 12:01:55.713339 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:01:55.715849 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 12:01:55.716009 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 12:01:55.718750 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 12:01:55.718918 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:01:55.719564 systemd[1]: Stopped target paths.target - Path Units. Jan 29 12:01:55.719829 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 12:01:55.726852 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:01:55.727388 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 12:01:55.730124 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 12:01:55.731640 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 12:01:55.731750 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:01:55.733389 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 12:01:55.733493 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:01:55.735130 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 12:01:55.735263 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:01:55.737140 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 12:01:55.737262 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 12:01:55.748958 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 29 12:01:55.749402 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 12:01:55.749531 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:01:55.750525 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 12:01:55.753722 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 12:01:55.753925 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:01:55.755965 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 12:01:55.756094 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:01:55.763892 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 12:01:55.764034 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 12:01:55.778935 ignition[1011]: INFO : Ignition 2.19.0 Jan 29 12:01:55.778935 ignition[1011]: INFO : Stage: umount Jan 29 12:01:55.780854 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:55.780854 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:01:55.781120 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 12:01:55.785326 ignition[1011]: INFO : umount: umount passed Jan 29 12:01:55.786263 ignition[1011]: INFO : Ignition finished successfully Jan 29 12:01:55.789376 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 12:01:55.789525 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 12:01:55.791947 systemd[1]: Stopped target network.target - Network. Jan 29 12:01:55.792279 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 12:01:55.792337 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 12:01:55.792635 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 12:01:55.792683 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 12:01:55.793145 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 12:01:55.793195 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 12:01:55.798070 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 12:01:55.798123 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 12:01:55.798512 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 12:01:55.801760 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 12:01:55.811094 systemd-networkd[781]: eth0: DHCPv6 lease lost Jan 29 12:01:55.812932 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 12:01:55.813110 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 12:01:55.813865 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 12:01:55.813996 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 12:01:55.818064 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 12:01:55.818127 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:01:55.825912 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 12:01:55.827024 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 12:01:55.827096 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:01:55.829456 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 29 12:01:55.829506 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:01:55.831762 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 12:01:55.831845 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 12:01:55.834281 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 12:01:55.834347 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:01:55.836737 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:01:55.849259 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 12:01:55.849416 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 12:01:55.854495 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 12:01:55.854713 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:01:55.856960 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 12:01:55.857028 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 12:01:55.857284 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 12:01:55.857325 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:01:55.857575 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 12:01:55.857628 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:01:55.863524 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 12:01:55.863587 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 12:01:55.865885 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:01:55.865939 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:01:55.876928 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 12:01:55.898226 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 12:01:55.898297 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:01:55.900528 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:01:55.900588 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:55.905927 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 12:01:55.906075 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 12:01:55.996096 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 12:01:55.996247 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 12:01:55.998574 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 12:01:55.999330 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 12:01:55.999396 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 12:01:56.003929 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 12:01:56.012990 systemd[1]: Switching root. Jan 29 12:01:56.047906 systemd-journald[193]: Journal stopped Jan 29 12:01:57.321184 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Jan 29 12:01:57.322315 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 12:01:57.322334 kernel: SELinux: policy capability open_perms=1 Jan 29 12:01:57.322356 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 12:01:57.322373 kernel: SELinux: policy capability always_check_network=0 Jan 29 12:01:57.322388 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 12:01:57.322399 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 12:01:57.322417 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 12:01:57.322428 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 12:01:57.322439 kernel: audit: type=1403 audit(1738152116.535:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 12:01:57.322452 systemd[1]: Successfully loaded SELinux policy in 47.700ms. Jan 29 12:01:57.322470 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.664ms. Jan 29 12:01:57.322483 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:01:57.322496 systemd[1]: Detected virtualization kvm. Jan 29 12:01:57.322511 systemd[1]: Detected architecture x86-64. Jan 29 12:01:57.322524 systemd[1]: Detected first boot. Jan 29 12:01:57.322540 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:01:57.322551 zram_generator::config[1075]: No configuration found. Jan 29 12:01:57.322565 systemd[1]: Populated /etc with preset unit settings. Jan 29 12:01:57.322577 systemd[1]: Queued start job for default target multi-user.target. Jan 29 12:01:57.322589 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 12:01:57.322602 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 12:01:57.322617 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 12:01:57.322628 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 12:01:57.322642 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 12:01:57.322654 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 12:01:57.322665 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 12:01:57.322677 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 12:01:57.322689 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 12:01:57.322701 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:01:57.322713 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:01:57.322727 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 12:01:57.322739 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 12:01:57.322751 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 12:01:57.322763 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:01:57.322775 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Jan 29 12:01:57.322787 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:01:57.322799 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 12:01:57.322823 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:01:57.322835 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:01:57.322850 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:01:57.322862 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:01:57.322874 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 12:01:57.322886 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 12:01:57.322898 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:01:57.322911 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:01:57.322923 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:01:57.322935 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:01:57.322957 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:01:57.322968 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 12:01:57.322980 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 12:01:57.322992 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 12:01:57.323004 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 12:01:57.323016 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:57.323027 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 12:01:57.323039 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 12:01:57.323051 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 12:01:57.323065 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 12:01:57.323077 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:01:57.323094 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:01:57.323107 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 12:01:57.323120 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:01:57.323132 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:01:57.323144 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:01:57.323156 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 12:01:57.323171 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:01:57.323184 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 12:01:57.323197 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 29 12:01:57.323210 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Jan 29 12:01:57.323223 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:01:57.323235 kernel: loop: module loaded Jan 29 12:01:57.323246 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:01:57.323258 kernel: fuse: init (API version 7.39) Jan 29 12:01:57.323270 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 12:01:57.323285 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 12:01:57.323297 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:01:57.323310 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:57.323322 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 12:01:57.323335 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 12:01:57.323348 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 12:01:57.323377 systemd-journald[1155]: Collecting audit messages is disabled. Jan 29 12:01:57.323402 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 12:01:57.323414 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 12:01:57.323426 systemd-journald[1155]: Journal started Jan 29 12:01:57.323448 systemd-journald[1155]: Runtime Journal (/run/log/journal/557a08ac939f450784928d9669e6df2a) is 6.0M, max 48.4M, 42.3M free. Jan 29 12:01:57.326886 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:01:57.328381 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 12:01:57.329869 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:01:57.331520 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 12:01:57.331784 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 12:01:57.333363 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:01:57.333608 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:01:57.335135 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:01:57.335375 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:01:57.337274 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 12:01:57.337516 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 12:01:57.339707 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:01:57.340115 kernel: ACPI: bus type drm_connector registered Jan 29 12:01:57.339977 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:01:57.341444 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:01:57.342338 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:01:57.344360 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:01:57.346925 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 12:01:57.349267 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 12:01:57.351031 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 29 12:01:57.369979 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 12:01:57.386959 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 12:01:57.389500 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 12:01:57.390787 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 12:01:57.394321 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 12:01:57.398427 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 12:01:57.400142 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:01:57.402824 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 12:01:57.404347 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:01:57.407970 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:01:57.410632 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:01:57.415258 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:01:57.417251 systemd-journald[1155]: Time spent on flushing to /var/log/journal/557a08ac939f450784928d9669e6df2a is 16.327ms for 946 entries. Jan 29 12:01:57.417251 systemd-journald[1155]: System Journal (/var/log/journal/557a08ac939f450784928d9669e6df2a) is 8.0M, max 195.6M, 187.6M free. Jan 29 12:01:57.455361 systemd-journald[1155]: Received client request to flush runtime journal. Jan 29 12:01:57.417530 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 12:01:57.419664 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 12:01:57.438072 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 12:01:57.440435 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 12:01:57.442734 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:01:57.445255 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 12:01:57.457308 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Jan 29 12:01:57.457325 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Jan 29 12:01:57.457427 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 12:01:57.460258 udevadm[1215]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 12:01:57.465256 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:01:57.473983 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 12:01:57.499985 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 12:01:57.509134 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:01:57.526690 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Jan 29 12:01:57.526714 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. 
Jan 29 12:01:57.532875 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:01:57.963326 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 12:01:57.977020 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:01:58.001521 systemd-udevd[1237]: Using default interface naming scheme 'v255'. Jan 29 12:01:58.018122 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:01:58.031081 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:01:58.048041 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 12:01:58.053654 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 29 12:01:58.082841 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1255) Jan 29 12:01:58.095831 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 12:01:58.100848 kernel: ACPI: button: Power Button [PWRF] Jan 29 12:01:58.115361 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 12:01:58.133348 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 12:01:58.142936 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 29 12:01:58.153655 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 12:01:58.158181 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 12:01:58.158432 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 12:01:58.206831 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 12:01:58.208229 systemd-networkd[1243]: lo: Link UP Jan 29 12:01:58.208729 systemd-networkd[1243]: lo: Gained carrier Jan 29 12:01:58.210698 systemd-networkd[1243]: Enumeration completed Jan 29 12:01:58.211212 systemd-networkd[1243]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:01:58.211217 systemd-networkd[1243]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:01:58.212136 systemd-networkd[1243]: eth0: Link UP Jan 29 12:01:58.212141 systemd-networkd[1243]: eth0: Gained carrier Jan 29 12:01:58.212155 systemd-networkd[1243]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:01:58.246219 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:01:58.248220 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:01:58.254875 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 12:01:58.259859 systemd-networkd[1243]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 12:01:58.277144 kernel: kvm_amd: TSC scaling supported Jan 29 12:01:58.277198 kernel: kvm_amd: Nested Virtualization enabled Jan 29 12:01:58.277214 kernel: kvm_amd: Nested Paging enabled Jan 29 12:01:58.277232 kernel: kvm_amd: LBR virtualization supported Jan 29 12:01:58.278393 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 29 12:01:58.278416 kernel: kvm_amd: Virtual GIF supported Jan 29 12:01:58.300829 kernel: EDAC MC: Ver: 3.0.0 Jan 29 12:01:58.337489 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Jan 29 12:01:58.351988 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 12:01:58.354021 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:58.361398 lvm[1282]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:01:58.393230 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 12:01:58.394997 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:01:58.402959 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 12:01:58.408964 lvm[1287]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:01:58.443954 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 12:01:58.445422 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:01:58.446693 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 12:01:58.446720 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:01:58.447769 systemd[1]: Reached target machines.target - Containers. Jan 29 12:01:58.449776 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 12:01:58.462925 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 12:01:58.465454 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 12:01:58.466739 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:01:58.467658 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 12:01:58.470131 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 12:01:58.475017 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 12:01:58.477520 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 12:01:58.485841 kernel: loop0: detected capacity change from 0 to 210664 Jan 29 12:01:58.521242 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 12:01:58.528240 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 12:01:58.574828 kernel: loop1: detected capacity change from 0 to 142488 Jan 29 12:01:58.612407 kernel: loop2: detected capacity change from 0 to 140768 Jan 29 12:01:58.621158 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 12:01:58.622129 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 12:01:58.659039 kernel: loop3: detected capacity change from 0 to 210664 Jan 29 12:01:58.667868 kernel: loop4: detected capacity change from 0 to 142488 Jan 29 12:01:58.680843 kernel: loop5: detected capacity change from 0 to 140768 Jan 29 12:01:58.690629 (sd-merge)[1307]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 12:01:58.691323 (sd-merge)[1307]: Merged extensions into '/usr'. Jan 29 12:01:58.696950 systemd[1]: Reloading requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 12:01:58.696969 systemd[1]: Reloading... 
Jan 29 12:01:58.763842 zram_generator::config[1338]: No configuration found. Jan 29 12:01:58.787997 ldconfig[1292]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 12:01:58.889739 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:01:58.954290 systemd[1]: Reloading finished in 256 ms. Jan 29 12:01:58.973255 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 12:01:58.974896 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 12:01:58.992986 systemd[1]: Starting ensure-sysext.service... Jan 29 12:01:58.995226 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:01:58.999755 systemd[1]: Reloading requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)... Jan 29 12:01:58.999773 systemd[1]: Reloading... Jan 29 12:01:59.018262 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 12:01:59.018625 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 12:01:59.019688 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 12:01:59.020012 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Jan 29 12:01:59.020092 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Jan 29 12:01:59.027627 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:01:59.027641 systemd-tmpfiles[1380]: Skipping /boot Jan 29 12:01:59.038259 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:01:59.038369 systemd-tmpfiles[1380]: Skipping /boot Jan 29 12:01:59.044833 zram_generator::config[1408]: No configuration found. Jan 29 12:01:59.166530 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:01:59.231471 systemd[1]: Reloading finished in 231 ms. Jan 29 12:01:59.233076 systemd-networkd[1243]: eth0: Gained IPv6LL Jan 29 12:01:59.252122 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 12:01:59.262531 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:01:59.269952 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:01:59.272479 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 12:01:59.274940 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 12:01:59.278936 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:01:59.282152 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 12:01:59.286940 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:59.287460 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 29 12:01:59.291158 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:01:59.294413 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:01:59.297724 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:01:59.299961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:01:59.300153 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:59.302343 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:01:59.302588 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:01:59.304786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:01:59.305092 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:01:59.307161 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:01:59.307436 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:01:59.325157 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 12:01:59.325520 augenrules[1486]: No rules Jan 29 12:01:59.327558 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:01:59.335149 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 12:01:59.339467 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:59.339675 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:01:59.350995 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:01:59.354914 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:01:59.359016 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:01:59.363948 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:01:59.365108 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:01:59.368066 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 12:01:59.369336 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:59.370449 systemd[1]: Finished ensure-sysext.service. Jan 29 12:01:59.371798 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 12:01:59.373677 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:01:59.373928 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:01:59.375631 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:01:59.375860 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:01:59.377276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:01:59.377490 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:01:59.379054 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 29 12:01:59.379289 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:01:59.383085 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 12:01:59.390537 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:01:59.390646 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:01:59.392019 systemd-resolved[1459]: Positive Trust Anchors: Jan 29 12:01:59.392035 systemd-resolved[1459]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:01:59.392067 systemd-resolved[1459]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:01:59.395482 systemd-resolved[1459]: Defaulting to hostname 'linux'. Jan 29 12:01:59.398016 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 12:01:59.399206 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 12:01:59.399372 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:01:59.400750 systemd[1]: Reached target network.target - Network. Jan 29 12:01:59.401669 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 12:01:59.402755 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:01:59.461121 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 12:01:59.462519 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:01:59.961943 systemd-resolved[1459]: Clock change detected. Flushing caches. Jan 29 12:01:59.961967 systemd-timesyncd[1518]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 12:01:59.962006 systemd-timesyncd[1518]: Initial clock synchronization to Wed 2025-01-29 12:01:59.961870 UTC. Jan 29 12:01:59.962825 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 12:01:59.964107 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 12:01:59.965384 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 12:01:59.966637 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 12:01:59.966656 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:01:59.967557 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 12:01:59.968862 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 12:01:59.970103 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jan 29 12:01:59.971347 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:01:59.972783 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 12:01:59.975959 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 12:01:59.978339 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 12:01:59.984225 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 12:01:59.985402 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:01:59.986381 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:01:59.987477 systemd[1]: System is tainted: cgroupsv1 Jan 29 12:01:59.987516 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:01:59.987539 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:01:59.988847 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 12:01:59.991090 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 12:01:59.993264 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 12:01:59.997794 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 12:02:00.003240 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 12:02:00.004513 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 12:02:00.006579 jq[1525]: false Jan 29 12:02:00.007723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:00.012982 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 12:02:00.017047 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:02:00.022516 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 12:02:00.025980 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 12:02:00.026933 dbus-daemon[1524]: [system] SELinux support is enabled Jan 29 12:02:00.030108 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 12:02:00.035616 extend-filesystems[1527]: Found loop3 Jan 29 12:02:00.035616 extend-filesystems[1527]: Found loop4 Jan 29 12:02:00.035616 extend-filesystems[1527]: Found loop5 Jan 29 12:02:00.035616 extend-filesystems[1527]: Found sr0 Jan 29 12:02:00.035616 extend-filesystems[1527]: Found vda Jan 29 12:02:00.035616 extend-filesystems[1527]: Found vda1 Jan 29 12:02:00.035616 extend-filesystems[1527]: Found vda2 Jan 29 12:02:00.035616 extend-filesystems[1527]: Found vda3 Jan 29 12:02:00.035616 extend-filesystems[1527]: Found usr Jan 29 12:02:00.035616 extend-filesystems[1527]: Found vda4 Jan 29 12:02:00.035616 extend-filesystems[1527]: Found vda6 Jan 29 12:02:00.035616 extend-filesystems[1527]: Found vda7 Jan 29 12:02:00.035616 extend-filesystems[1527]: Found vda9 Jan 29 12:02:00.035616 extend-filesystems[1527]: Checking size of /dev/vda9 Jan 29 12:02:00.074922 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1239) Jan 29 12:02:00.075079 extend-filesystems[1527]: Resized partition /dev/vda9 Jan 29 12:02:00.037048 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 29 12:02:00.041270 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 12:02:00.042932 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 12:02:00.046460 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 12:02:00.048699 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:02:00.076870 jq[1554]: true Jan 29 12:02:00.084408 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 12:02:00.063551 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:02:00.084610 extend-filesystems[1564]: resize2fs 1.47.1 (20-May-2024) Jan 29 12:02:00.065036 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 12:02:00.092559 update_engine[1551]: I20250129 12:02:00.087414 1551 main.cc:92] Flatcar Update Engine starting Jan 29 12:02:00.066605 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 12:02:00.071134 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:02:00.084714 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 12:02:00.088232 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:02:00.088556 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 12:02:00.098072 update_engine[1551]: I20250129 12:02:00.097259 1551 update_check_scheduler.cc:74] Next update check in 10m9s Jan 29 12:02:00.120853 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 12:02:00.111806 (ntainerd)[1571]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:02:00.113296 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 12:02:00.145497 tar[1567]: linux-amd64/helm Jan 29 12:02:00.145743 extend-filesystems[1564]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 12:02:00.145743 extend-filesystems[1564]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 12:02:00.145743 extend-filesystems[1564]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 12:02:00.167654 jq[1570]: true Jan 29 12:02:00.113710 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 12:02:00.169087 extend-filesystems[1527]: Resized filesystem in /dev/vda9 Jan 29 12:02:00.145498 systemd-logind[1547]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 12:02:00.189525 bash[1607]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:02:00.145519 systemd-logind[1547]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 12:02:00.147472 systemd-logind[1547]: New seat seat0. Jan 29 12:02:00.147616 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:02:00.148608 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:02:00.156030 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:02:00.163969 systemd[1]: Started update-engine.service - Update Engine. Jan 29 12:02:00.172258 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 29 12:02:00.172458 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 12:02:00.172582 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 12:02:00.176349 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 12:02:00.176466 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:02:00.177448 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 12:02:00.188762 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 12:02:00.197596 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:02:00.200852 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 12:02:00.244884 locksmithd[1609]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:02:00.265290 sshd_keygen[1552]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:02:00.295936 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:02:00.304072 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:02:00.313374 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 12:02:00.313742 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:02:00.322552 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 12:02:00.337398 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:02:00.348282 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:02:00.350670 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 12:02:00.352708 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 12:02:00.358445 containerd[1571]: time="2025-01-29T12:02:00.358376531Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 12:02:00.382550 containerd[1571]: time="2025-01-29T12:02:00.382409433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:02:00.384327 containerd[1571]: time="2025-01-29T12:02:00.384292143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:02:00.384327 containerd[1571]: time="2025-01-29T12:02:00.384320165Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:02:00.384327 containerd[1571]: time="2025-01-29T12:02:00.384335765Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:02:00.384714 containerd[1571]: time="2025-01-29T12:02:00.384506495Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 29 12:02:00.384714 containerd[1571]: time="2025-01-29T12:02:00.384534056Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:02:00.384714 containerd[1571]: time="2025-01-29T12:02:00.384598417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:02:00.384714 containerd[1571]: time="2025-01-29T12:02:00.384612243Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:02:00.384916 containerd[1571]: time="2025-01-29T12:02:00.384880396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:02:00.385033 containerd[1571]: time="2025-01-29T12:02:00.385006713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:02:00.385033 containerd[1571]: time="2025-01-29T12:02:00.385028373Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:02:00.385081 containerd[1571]: time="2025-01-29T12:02:00.385039003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:02:00.385162 containerd[1571]: time="2025-01-29T12:02:00.385139291Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:02:00.385405 containerd[1571]: time="2025-01-29T12:02:00.385377758Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:02:00.385582 containerd[1571]: time="2025-01-29T12:02:00.385556744Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:02:00.385582 containerd[1571]: time="2025-01-29T12:02:00.385574066Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:02:00.385700 containerd[1571]: time="2025-01-29T12:02:00.385683021Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 12:02:00.385759 containerd[1571]: time="2025-01-29T12:02:00.385743765Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:02:00.392738 containerd[1571]: time="2025-01-29T12:02:00.392641718Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:02:00.392738 containerd[1571]: time="2025-01-29T12:02:00.392682284Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:02:00.392738 containerd[1571]: time="2025-01-29T12:02:00.392703584Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:02:00.392738 containerd[1571]: time="2025-01-29T12:02:00.392719284Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 29 12:02:00.392738 containerd[1571]: time="2025-01-29T12:02:00.392733310Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:02:00.392969 containerd[1571]: time="2025-01-29T12:02:00.392887499Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 12:02:00.393231 containerd[1571]: time="2025-01-29T12:02:00.393207689Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 12:02:00.393417 containerd[1571]: time="2025-01-29T12:02:00.393395371Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:02:00.393417 containerd[1571]: time="2025-01-29T12:02:00.393416721Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:02:00.393474 containerd[1571]: time="2025-01-29T12:02:00.393431178Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:02:00.393474 containerd[1571]: time="2025-01-29T12:02:00.393445656Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:02:00.393474 containerd[1571]: time="2025-01-29T12:02:00.393459522Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:02:00.393474 containerd[1571]: time="2025-01-29T12:02:00.393472857Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:02:00.393543 containerd[1571]: time="2025-01-29T12:02:00.393488135Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 12:02:00.393543 containerd[1571]: time="2025-01-29T12:02:00.393503324Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:02:00.393543 containerd[1571]: time="2025-01-29T12:02:00.393516098Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:02:00.393543 containerd[1571]: time="2025-01-29T12:02:00.393527890Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:02:00.393543 containerd[1571]: time="2025-01-29T12:02:00.393538560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 12:02:00.393635 containerd[1571]: time="2025-01-29T12:02:00.393556924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:02:00.393635 containerd[1571]: time="2025-01-29T12:02:00.393573906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:02:00.393635 containerd[1571]: time="2025-01-29T12:02:00.393586850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:02:00.393635 containerd[1571]: time="2025-01-29T12:02:00.393598622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:02:00.393635 containerd[1571]: time="2025-01-29T12:02:00.393610645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 29 12:02:00.393635 containerd[1571]: time="2025-01-29T12:02:00.393627807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:02:00.393752 containerd[1571]: time="2025-01-29T12:02:00.393640060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 12:02:00.393752 containerd[1571]: time="2025-01-29T12:02:00.393652774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 12:02:00.393752 containerd[1571]: time="2025-01-29T12:02:00.393675186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 12:02:00.393752 containerd[1571]: time="2025-01-29T12:02:00.393695504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:02:00.393752 containerd[1571]: time="2025-01-29T12:02:00.393710723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:02:00.393752 containerd[1571]: time="2025-01-29T12:02:00.393723817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:02:00.393752 containerd[1571]: time="2025-01-29T12:02:00.393737172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:02:00.393752 containerd[1571]: time="2025-01-29T12:02:00.393752992Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:02:00.393919 containerd[1571]: time="2025-01-29T12:02:00.393773520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:02:00.393919 containerd[1571]: time="2025-01-29T12:02:00.393786044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:02:00.393919 containerd[1571]: time="2025-01-29T12:02:00.393796754Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:02:00.393919 containerd[1571]: time="2025-01-29T12:02:00.393859592Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:02:00.393919 containerd[1571]: time="2025-01-29T12:02:00.393875782Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:02:00.393919 containerd[1571]: time="2025-01-29T12:02:00.393886292Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 12:02:00.393919 containerd[1571]: time="2025-01-29T12:02:00.393907512Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:02:00.393919 containerd[1571]: time="2025-01-29T12:02:00.393917380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:02:00.394060 containerd[1571]: time="2025-01-29T12:02:00.393931597Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:02:00.394060 containerd[1571]: time="2025-01-29T12:02:00.393941696Z" level=info msg="NRI interface is disabled by configuration." 
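[Editor's note] The "skip loading plugin" messages above (aufs, blockfile, btrfs, devmapper, zfs, otlp tracing) are containerd probing each built-in plugin and recording why it cannot be used on this host. As a hedged illustration, a Python sketch that shells out to containerd's bundled ctr tool to print the same plugin table after boot; the ok/skip/error STATUS column corresponds to these log entries.

    import subprocess

    def list_containerd_plugins() -> str:
        # "ctr plugins ls" prints TYPE, ID, PLATFORMS and STATUS for each
        # registered plugin, matching the load/skip messages in this log.
        result = subprocess.run(
            ["ctr", "plugins", "ls"],
            check=True, capture_output=True, text=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(list_containerd_plugins())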
Jan 29 12:02:00.394060 containerd[1571]: time="2025-01-29T12:02:00.393951304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 12:02:00.394246 containerd[1571]: time="2025-01-29T12:02:00.394193127Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:02:00.394443 containerd[1571]: time="2025-01-29T12:02:00.394423419Z" level=info msg="Connect containerd service" Jan 29 12:02:00.394484 containerd[1571]: time="2025-01-29T12:02:00.394463093Z" level=info msg="using legacy CRI server" Jan 29 12:02:00.394484 containerd[1571]: time="2025-01-29T12:02:00.394470958Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:02:00.395827 containerd[1571]: time="2025-01-29T12:02:00.394548383Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:02:00.395827 containerd[1571]: time="2025-01-29T12:02:00.395098004Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:02:00.395827 containerd[1571]: time="2025-01-29T12:02:00.395300924Z" level=info msg="Start subscribing containerd event" Jan 29 12:02:00.395827 containerd[1571]: time="2025-01-29T12:02:00.395390392Z" level=info msg="Start recovering state" Jan 29 12:02:00.395827 containerd[1571]: time="2025-01-29T12:02:00.395411932Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:02:00.395827 containerd[1571]: time="2025-01-29T12:02:00.395461515Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:02:00.395827 containerd[1571]: time="2025-01-29T12:02:00.395486763Z" level=info msg="Start event monitor" Jan 29 12:02:00.395827 containerd[1571]: time="2025-01-29T12:02:00.395507762Z" level=info msg="Start snapshots syncer" Jan 29 12:02:00.395827 containerd[1571]: time="2025-01-29T12:02:00.395518572Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:02:00.395827 containerd[1571]: time="2025-01-29T12:02:00.395530274Z" level=info msg="Start streaming server" Jan 29 12:02:00.395735 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:02:00.396887 containerd[1571]: time="2025-01-29T12:02:00.396867331Z" level=info msg="containerd successfully booted in 0.039480s" Jan 29 12:02:00.554381 tar[1567]: linux-amd64/LICENSE Jan 29 12:02:00.554486 tar[1567]: linux-amd64/README.md Jan 29 12:02:00.567881 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 12:02:00.815933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:00.817593 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:02:00.818866 systemd[1]: Startup finished in 6.527s (kernel) + 3.830s (userspace) = 10.358s. Jan 29 12:02:00.838403 (kubelet)[1658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:02:01.279404 kubelet[1658]: E0129 12:02:01.279211 1658 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:02:01.283425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:02:01.283711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:02:09.079404 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:02:09.093022 systemd[1]: Started sshd@0-10.0.0.134:22-10.0.0.1:43074.service - OpenSSH per-connection server daemon (10.0.0.1:43074). Jan 29 12:02:09.127670 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 43074 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:09.129874 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:09.138022 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:02:09.148012 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:02:09.149554 systemd-logind[1547]: New session 1 of user core. Jan 29 12:02:09.160513 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 29 12:02:09.162899 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:02:09.170294 (systemd)[1678]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:02:09.269300 systemd[1678]: Queued start job for default target default.target. Jan 29 12:02:09.269732 systemd[1678]: Created slice app.slice - User Application Slice. Jan 29 12:02:09.269750 systemd[1678]: Reached target paths.target - Paths. Jan 29 12:02:09.269762 systemd[1678]: Reached target timers.target - Timers. Jan 29 12:02:09.281909 systemd[1678]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:02:09.288515 systemd[1678]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:02:09.288585 systemd[1678]: Reached target sockets.target - Sockets. Jan 29 12:02:09.288600 systemd[1678]: Reached target basic.target - Basic System. Jan 29 12:02:09.288641 systemd[1678]: Reached target default.target - Main User Target. Jan 29 12:02:09.288673 systemd[1678]: Startup finished in 111ms. Jan 29 12:02:09.289695 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:02:09.291253 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:02:09.349227 systemd[1]: Started sshd@1-10.0.0.134:22-10.0.0.1:43082.service - OpenSSH per-connection server daemon (10.0.0.1:43082). Jan 29 12:02:09.380832 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 43082 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:09.382704 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:09.387970 systemd-logind[1547]: New session 2 of user core. Jan 29 12:02:09.401081 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 12:02:09.454193 sshd[1690]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:09.472071 systemd[1]: Started sshd@2-10.0.0.134:22-10.0.0.1:43094.service - OpenSSH per-connection server daemon (10.0.0.1:43094). Jan 29 12:02:09.472645 systemd[1]: sshd@1-10.0.0.134:22-10.0.0.1:43082.service: Deactivated successfully. Jan 29 12:02:09.475583 systemd-logind[1547]: Session 2 logged out. Waiting for processes to exit. Jan 29 12:02:09.477113 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 12:02:09.477666 systemd-logind[1547]: Removed session 2. Jan 29 12:02:09.498048 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 43094 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:09.499489 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:09.503475 systemd-logind[1547]: New session 3 of user core. Jan 29 12:02:09.513070 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:02:09.562134 sshd[1695]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:09.575022 systemd[1]: Started sshd@3-10.0.0.134:22-10.0.0.1:43108.service - OpenSSH per-connection server daemon (10.0.0.1:43108). Jan 29 12:02:09.575461 systemd[1]: sshd@2-10.0.0.134:22-10.0.0.1:43094.service: Deactivated successfully. Jan 29 12:02:09.577835 systemd-logind[1547]: Session 3 logged out. Waiting for processes to exit. Jan 29 12:02:09.578923 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 12:02:09.579712 systemd-logind[1547]: Removed session 3. 
Jan 29 12:02:09.601634 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 43108 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:09.603107 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:09.607587 systemd-logind[1547]: New session 4 of user core. Jan 29 12:02:09.618127 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:02:09.671981 sshd[1703]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:09.689021 systemd[1]: Started sshd@4-10.0.0.134:22-10.0.0.1:43120.service - OpenSSH per-connection server daemon (10.0.0.1:43120). Jan 29 12:02:09.689454 systemd[1]: sshd@3-10.0.0.134:22-10.0.0.1:43108.service: Deactivated successfully. Jan 29 12:02:09.692180 systemd-logind[1547]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:02:09.693305 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 12:02:09.694267 systemd-logind[1547]: Removed session 4. Jan 29 12:02:09.715152 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 43120 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:09.716439 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:09.720253 systemd-logind[1547]: New session 5 of user core. Jan 29 12:02:09.727045 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:02:09.785073 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 12:02:09.785405 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:02:09.801491 sudo[1718]: pam_unix(sudo:session): session closed for user root Jan 29 12:02:09.803404 sshd[1711]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:09.821018 systemd[1]: Started sshd@5-10.0.0.134:22-10.0.0.1:43132.service - OpenSSH per-connection server daemon (10.0.0.1:43132). Jan 29 12:02:09.821447 systemd[1]: sshd@4-10.0.0.134:22-10.0.0.1:43120.service: Deactivated successfully. Jan 29 12:02:09.823999 systemd-logind[1547]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:02:09.825018 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:02:09.825935 systemd-logind[1547]: Removed session 5. Jan 29 12:02:09.847846 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 43132 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:09.849917 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:09.854190 systemd-logind[1547]: New session 6 of user core. Jan 29 12:02:09.864063 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:02:09.918468 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 12:02:09.918822 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:02:09.922540 sudo[1728]: pam_unix(sudo:session): session closed for user root Jan 29 12:02:09.928702 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 12:02:09.929050 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:02:09.952058 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 12:02:09.953757 auditctl[1731]: No rules Jan 29 12:02:09.955135 systemd[1]: audit-rules.service: Deactivated successfully. 
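[Editor's note] The sudo session above removes two rule fragments from /etc/audit/rules.d and triggers a restart of audit-rules.service, which completes just below with auditctl and augenrules both reporting "No rules". A rough Python sketch of the same flush-and-reload cycle, assuming the standard auditd userspace tools are installed; the exact commands Flatcar's audit-rules.service runs internally may differ.

    import subprocess

    def reload_audit_rules() -> None:
        # Flush the audit rules currently loaded in the kernel...
        subprocess.run(["auditctl", "-D"], check=True)
        # ...then rebuild /etc/audit/audit.rules from /etc/audit/rules.d/*.rules
        # and load the result into the kernel.
        subprocess.run(["augenrules", "--load"], check=True)

    if __name__ == "__main__":
        reload_audit_rules()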
Jan 29 12:02:09.955459 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 12:02:09.957411 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:02:09.987569 augenrules[1750]: No rules Jan 29 12:02:09.989416 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:02:09.990660 sudo[1727]: pam_unix(sudo:session): session closed for user root Jan 29 12:02:09.992500 sshd[1720]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:10.007121 systemd[1]: Started sshd@6-10.0.0.134:22-10.0.0.1:43142.service - OpenSSH per-connection server daemon (10.0.0.1:43142). Jan 29 12:02:10.007731 systemd[1]: sshd@5-10.0.0.134:22-10.0.0.1:43132.service: Deactivated successfully. Jan 29 12:02:10.010513 systemd-logind[1547]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:02:10.011730 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 12:02:10.012561 systemd-logind[1547]: Removed session 6. Jan 29 12:02:10.034419 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 43142 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:10.035684 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:10.039634 systemd-logind[1547]: New session 7 of user core. Jan 29 12:02:10.048035 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:02:10.100198 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:02:10.100545 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:02:10.387077 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 12:02:10.387298 (dockerd)[1782]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 12:02:10.653288 dockerd[1782]: time="2025-01-29T12:02:10.653135391Z" level=info msg="Starting up" Jan 29 12:02:11.262570 dockerd[1782]: time="2025-01-29T12:02:11.262512934Z" level=info msg="Loading containers: start." Jan 29 12:02:11.293661 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 12:02:11.300109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:11.467797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:11.472711 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:02:11.555146 kubelet[1852]: E0129 12:02:11.554952 1852 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:02:11.561926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:02:11.562179 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:02:11.631836 kernel: Initializing XFRM netlink socket Jan 29 12:02:11.711776 systemd-networkd[1243]: docker0: Link UP Jan 29 12:02:11.738658 dockerd[1782]: time="2025-01-29T12:02:11.738592603Z" level=info msg="Loading containers: done." 
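[Editor's note] dockerd is starting here and, as logged just below, finishes initialization and serves its API on /run/docker.sock. A small pure-stdlib Python sketch that checks the daemon is answering on that Unix socket via the Engine API's /_ping endpoint; the socket path is the one reported below, everything else is illustrative.

    import socket

    DOCKER_SOCK = "/run/docker.sock"  # socket path reported by dockerd below

    def ping_docker(path: str = DOCKER_SOCK) -> str:
        # Minimal HTTP/1.0 request over the Unix socket; GET /_ping returns
        # "OK" when the daemon is up. No client library required.
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(path)
            s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
            chunks = []
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode(errors="replace")

    if __name__ == "__main__":
        print(ping_docker())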
Jan 29 12:02:11.752936 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1306405207-merged.mount: Deactivated successfully. Jan 29 12:02:11.755754 dockerd[1782]: time="2025-01-29T12:02:11.755707554Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 12:02:11.755858 dockerd[1782]: time="2025-01-29T12:02:11.755841154Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 12:02:11.755997 dockerd[1782]: time="2025-01-29T12:02:11.755969144Z" level=info msg="Daemon has completed initialization" Jan 29 12:02:11.791750 dockerd[1782]: time="2025-01-29T12:02:11.791681765Z" level=info msg="API listen on /run/docker.sock" Jan 29 12:02:11.793039 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 12:02:12.595414 containerd[1571]: time="2025-01-29T12:02:12.595354702Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 12:02:13.330184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3143442551.mount: Deactivated successfully. Jan 29 12:02:14.299951 containerd[1571]: time="2025-01-29T12:02:14.299879836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:14.300805 containerd[1571]: time="2025-01-29T12:02:14.300725602Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 29 12:02:14.301935 containerd[1571]: time="2025-01-29T12:02:14.301903691Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:14.304960 containerd[1571]: time="2025-01-29T12:02:14.304928723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:14.305954 containerd[1571]: time="2025-01-29T12:02:14.305918388Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.710521247s" Jan 29 12:02:14.305954 containerd[1571]: time="2025-01-29T12:02:14.305952242Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 12:02:14.330202 containerd[1571]: time="2025-01-29T12:02:14.330157286Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 12:02:15.680786 containerd[1571]: time="2025-01-29T12:02:15.680718147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:15.682033 containerd[1571]: time="2025-01-29T12:02:15.681416126Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 29 12:02:15.682592 containerd[1571]: 
time="2025-01-29T12:02:15.682554511Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:15.685588 containerd[1571]: time="2025-01-29T12:02:15.685544397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:15.686636 containerd[1571]: time="2025-01-29T12:02:15.686591340Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.356386935s" Jan 29 12:02:15.686636 containerd[1571]: time="2025-01-29T12:02:15.686631665Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 12:02:15.711957 containerd[1571]: time="2025-01-29T12:02:15.711909111Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 12:02:16.884128 containerd[1571]: time="2025-01-29T12:02:16.884058250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:16.884872 containerd[1571]: time="2025-01-29T12:02:16.884802335Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 29 12:02:16.886109 containerd[1571]: time="2025-01-29T12:02:16.886058540Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:16.889403 containerd[1571]: time="2025-01-29T12:02:16.889356504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:16.890520 containerd[1571]: time="2025-01-29T12:02:16.890475312Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.178515366s" Jan 29 12:02:16.890561 containerd[1571]: time="2025-01-29T12:02:16.890528592Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 12:02:16.917750 containerd[1571]: time="2025-01-29T12:02:16.917707162Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 12:02:17.948418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount994337092.mount: Deactivated successfully. 
Jan 29 12:02:18.705159 containerd[1571]: time="2025-01-29T12:02:18.705087139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:18.705995 containerd[1571]: time="2025-01-29T12:02:18.705958943Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 29 12:02:18.707388 containerd[1571]: time="2025-01-29T12:02:18.707357455Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:18.709751 containerd[1571]: time="2025-01-29T12:02:18.709685470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:18.710227 containerd[1571]: time="2025-01-29T12:02:18.710182893Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.792427591s" Jan 29 12:02:18.710259 containerd[1571]: time="2025-01-29T12:02:18.710227397Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 12:02:18.733599 containerd[1571]: time="2025-01-29T12:02:18.733558312Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 12:02:19.311214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3126453877.mount: Deactivated successfully. 
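[Editor's note] The PullImage entries in this stretch are containerd fetching the Kubernetes control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, and the coredns pull requested just above) on behalf of the kubelet over CRI. A hedged sketch of pulling one of the same images by hand with containerd's bundled ctr tool; the image reference is taken from the log, and the "k8s.io" namespace is used because that is where CRI-managed images live.

    import subprocess

    IMAGE = "registry.k8s.io/coredns/coredns:v1.11.1"  # image pulled in this log

    def pull_with_ctr(image: str) -> None:
        # Pull into the "k8s.io" namespace so the image appears alongside
        # the ones the kubelet requested through CRI.
        subprocess.run(
            ["ctr", "--namespace", "k8s.io", "images", "pull", image],
            check=True,
        )

    if __name__ == "__main__":
        pull_with_ctr(IMAGE)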
Jan 29 12:02:19.984708 containerd[1571]: time="2025-01-29T12:02:19.984654206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:19.985543 containerd[1571]: time="2025-01-29T12:02:19.985507967Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 12:02:19.986899 containerd[1571]: time="2025-01-29T12:02:19.986845736Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:19.989589 containerd[1571]: time="2025-01-29T12:02:19.989544095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:19.990833 containerd[1571]: time="2025-01-29T12:02:19.990790242Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.257191273s" Jan 29 12:02:19.990833 containerd[1571]: time="2025-01-29T12:02:19.990826029Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 12:02:20.011104 containerd[1571]: time="2025-01-29T12:02:20.011064997Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 12:02:20.506729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1021297386.mount: Deactivated successfully. 
Jan 29 12:02:20.511957 containerd[1571]: time="2025-01-29T12:02:20.511916804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:20.512733 containerd[1571]: time="2025-01-29T12:02:20.512681377Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 29 12:02:20.513846 containerd[1571]: time="2025-01-29T12:02:20.513803030Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:20.516375 containerd[1571]: time="2025-01-29T12:02:20.516339236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:20.517276 containerd[1571]: time="2025-01-29T12:02:20.517223955Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 506.113232ms" Jan 29 12:02:20.517276 containerd[1571]: time="2025-01-29T12:02:20.517271283Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 12:02:20.538759 containerd[1571]: time="2025-01-29T12:02:20.538717896Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 12:02:21.168202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1466065154.mount: Deactivated successfully. Jan 29 12:02:21.812496 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 12:02:21.828023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:21.967838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:21.972770 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:02:22.424009 kubelet[2169]: E0129 12:02:22.423878 2169 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:02:22.429187 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:02:22.429499 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
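[Editor's note] The repeated kubelet failures above all have the same cause: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is normally written during kubeadm init/join, and the failures stop later in this log once kubelet[2366] starts with a real configuration. Purely to illustrate what the kubelet is looking for, a Python sketch that writes the smallest KubeletConfiguration document the kubelet will parse; everything beyond the two fields shown would be an assumption, and defaults apply for all omitted settings.

    import pathlib

    CONFIG_PATH = pathlib.Path("/var/lib/kubelet/config.yaml")  # path from the error above

    # Minimal KubeletConfiguration; real clusters add authentication, cgroup
    # driver, clusterDNS, etc. (normally generated by kubeadm, not by hand).
    MINIMAL_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    """

    CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
    CONFIG_PATH.write_text(MINIMAL_CONFIG)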
Jan 29 12:02:23.381960 containerd[1571]: time="2025-01-29T12:02:23.381890153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:23.382658 containerd[1571]: time="2025-01-29T12:02:23.382603040Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 29 12:02:23.383894 containerd[1571]: time="2025-01-29T12:02:23.383865767Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:23.386778 containerd[1571]: time="2025-01-29T12:02:23.386751879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:23.388305 containerd[1571]: time="2025-01-29T12:02:23.388261369Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.849495043s" Jan 29 12:02:23.388337 containerd[1571]: time="2025-01-29T12:02:23.388309259Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 12:02:25.694386 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:25.702001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:25.718838 systemd[1]: Reloading requested from client PID 2267 ('systemctl') (unit session-7.scope)... Jan 29 12:02:25.718853 systemd[1]: Reloading... Jan 29 12:02:25.786117 zram_generator::config[2307]: No configuration found. Jan 29 12:02:26.143949 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:02:26.216449 systemd[1]: Reloading finished in 497 ms. Jan 29 12:02:26.274858 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 12:02:26.274991 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 12:02:26.275383 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:26.295047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:26.451836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:26.462307 (kubelet)[2366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:02:26.504360 kubelet[2366]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:02:26.504360 kubelet[2366]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 29 12:02:26.504360 kubelet[2366]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:02:26.505545 kubelet[2366]: I0129 12:02:26.505494 2366 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:02:26.751234 kubelet[2366]: I0129 12:02:26.751128 2366 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:02:26.751234 kubelet[2366]: I0129 12:02:26.751157 2366 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:02:26.751389 kubelet[2366]: I0129 12:02:26.751370 2366 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:02:26.766327 kubelet[2366]: I0129 12:02:26.766273 2366 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:02:26.767346 kubelet[2366]: E0129 12:02:26.767310 2366 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:26.777185 kubelet[2366]: I0129 12:02:26.777156 2366 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 12:02:26.777581 kubelet[2366]: I0129 12:02:26.777549 2366 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:02:26.777737 kubelet[2366]: I0129 12:02:26.777574 2366 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:02:26.778185 kubelet[2366]: I0129 12:02:26.778161 2366 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 29 12:02:26.778185 kubelet[2366]: I0129 12:02:26.778179 2366 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:02:26.778346 kubelet[2366]: I0129 12:02:26.778324 2366 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:02:26.778917 kubelet[2366]: I0129 12:02:26.778894 2366 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:02:26.778917 kubelet[2366]: I0129 12:02:26.778911 2366 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:02:26.778961 kubelet[2366]: I0129 12:02:26.778932 2366 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:02:26.778961 kubelet[2366]: I0129 12:02:26.778951 2366 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:02:26.779458 kubelet[2366]: W0129 12:02:26.779405 2366 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:26.779458 kubelet[2366]: E0129 12:02:26.779458 2366 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:26.780295 kubelet[2366]: W0129 12:02:26.780247 2366 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:26.780390 kubelet[2366]: E0129 12:02:26.780367 2366 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:26.783137 kubelet[2366]: I0129 12:02:26.783105 2366 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:02:26.784241 kubelet[2366]: I0129 12:02:26.784218 2366 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:02:26.784295 kubelet[2366]: W0129 12:02:26.784263 2366 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 12:02:26.784883 kubelet[2366]: I0129 12:02:26.784796 2366 server.go:1264] "Started kubelet" Jan 29 12:02:26.784985 kubelet[2366]: I0129 12:02:26.784952 2366 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:02:26.786017 kubelet[2366]: I0129 12:02:26.785903 2366 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:02:26.786802 kubelet[2366]: I0129 12:02:26.786431 2366 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:02:26.786802 kubelet[2366]: I0129 12:02:26.786702 2366 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:02:26.787613 kubelet[2366]: I0129 12:02:26.787525 2366 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:02:26.788384 kubelet[2366]: E0129 12:02:26.788044 2366 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:02:26.788384 kubelet[2366]: I0129 12:02:26.788081 2366 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:02:26.788384 kubelet[2366]: I0129 12:02:26.788161 2366 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:02:26.788384 kubelet[2366]: I0129 12:02:26.788208 2366 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:02:26.788490 kubelet[2366]: W0129 12:02:26.788428 2366 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:26.788490 kubelet[2366]: E0129 12:02:26.788456 2366 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:26.789389 kubelet[2366]: E0129 12:02:26.789361 2366 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:02:26.789454 kubelet[2366]: E0129 12:02:26.789409 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="200ms" Jan 29 12:02:26.789917 kubelet[2366]: I0129 12:02:26.789899 2366 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:02:26.789988 kubelet[2366]: I0129 12:02:26.789972 2366 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:02:26.790894 kubelet[2366]: I0129 12:02:26.790875 2366 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:02:26.792116 kubelet[2366]: E0129 12:02:26.792022 2366 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f282b1ab072ab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 12:02:26.784776875 +0000 UTC m=+0.318598473,LastTimestamp:2025-01-29 12:02:26.784776875 +0000 UTC m=+0.318598473,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 12:02:26.803307 kubelet[2366]: I0129 12:02:26.803152 2366 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:02:26.804632 kubelet[2366]: I0129 12:02:26.804597 2366 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 12:02:26.804632 kubelet[2366]: I0129 12:02:26.804627 2366 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:02:26.804707 kubelet[2366]: I0129 12:02:26.804654 2366 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:02:26.804707 kubelet[2366]: E0129 12:02:26.804693 2366 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:02:26.807855 kubelet[2366]: W0129 12:02:26.807634 2366 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:26.807855 kubelet[2366]: E0129 12:02:26.807679 2366 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:26.813076 kubelet[2366]: I0129 12:02:26.813046 2366 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:02:26.813076 kubelet[2366]: I0129 12:02:26.813070 2366 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:02:26.813154 kubelet[2366]: I0129 12:02:26.813087 2366 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:02:26.889262 kubelet[2366]: I0129 12:02:26.889215 2366 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:02:26.889536 kubelet[2366]: E0129 12:02:26.889500 2366 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Jan 29 12:02:26.905740 kubelet[2366]: E0129 12:02:26.905703 2366 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 12:02:26.990098 kubelet[2366]: E0129 12:02:26.990057 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="400ms" Jan 29 12:02:27.091398 kubelet[2366]: I0129 12:02:27.091298 2366 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:02:27.091636 kubelet[2366]: E0129 12:02:27.091607 2366 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Jan 29 12:02:27.106770 kubelet[2366]: E0129 12:02:27.106733 2366 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 12:02:27.390801 kubelet[2366]: E0129 12:02:27.390671 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="800ms" Jan 29 12:02:27.493154 kubelet[2366]: I0129 12:02:27.493115 2366 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:02:27.493434 kubelet[2366]: E0129 12:02:27.493397 2366 kubelet_node_status.go:96] "Unable to register 
node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Jan 29 12:02:27.507551 kubelet[2366]: E0129 12:02:27.507515 2366 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 12:02:27.697932 kubelet[2366]: W0129 12:02:27.697770 2366 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:27.697932 kubelet[2366]: E0129 12:02:27.697869 2366 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:27.702781 kubelet[2366]: I0129 12:02:27.702750 2366 policy_none.go:49] "None policy: Start" Jan 29 12:02:27.703598 kubelet[2366]: I0129 12:02:27.703565 2366 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:02:27.703637 kubelet[2366]: I0129 12:02:27.703604 2366 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:02:27.742851 kubelet[2366]: I0129 12:02:27.742795 2366 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:02:27.743044 kubelet[2366]: I0129 12:02:27.743001 2366 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:02:27.743121 kubelet[2366]: I0129 12:02:27.743107 2366 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:02:27.744730 kubelet[2366]: E0129 12:02:27.744711 2366 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 12:02:28.106546 kubelet[2366]: W0129 12:02:28.106411 2366 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:28.106546 kubelet[2366]: E0129 12:02:28.106459 2366 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:28.192135 kubelet[2366]: E0129 12:02:28.192076 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="1.6s" Jan 29 12:02:28.209643 kubelet[2366]: W0129 12:02:28.209595 2366 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:28.209643 kubelet[2366]: E0129 12:02:28.209646 2366 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: 
connect: connection refused Jan 29 12:02:28.268313 kubelet[2366]: W0129 12:02:28.268238 2366 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:28.268313 kubelet[2366]: E0129 12:02:28.268310 2366 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:28.294628 kubelet[2366]: I0129 12:02:28.294609 2366 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:02:28.294858 kubelet[2366]: E0129 12:02:28.294834 2366 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Jan 29 12:02:28.308230 kubelet[2366]: I0129 12:02:28.308184 2366 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 12:02:28.308951 kubelet[2366]: I0129 12:02:28.308927 2366 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 12:02:28.309672 kubelet[2366]: I0129 12:02:28.309629 2366 topology_manager.go:215] "Topology Admit Handler" podUID="3c668ceea0ae18ab014f01387328ef48" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 12:02:28.397498 kubelet[2366]: I0129 12:02:28.397383 2366 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:02:28.397498 kubelet[2366]: I0129 12:02:28.397418 2366 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:02:28.397498 kubelet[2366]: I0129 12:02:28.397435 2366 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:02:28.397498 kubelet[2366]: I0129 12:02:28.397451 2366 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 12:02:28.397498 kubelet[2366]: I0129 12:02:28.397467 2366 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/3c668ceea0ae18ab014f01387328ef48-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3c668ceea0ae18ab014f01387328ef48\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:02:28.397725 kubelet[2366]: I0129 12:02:28.397484 2366 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:02:28.397725 kubelet[2366]: I0129 12:02:28.397497 2366 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:02:28.397725 kubelet[2366]: I0129 12:02:28.397534 2366 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c668ceea0ae18ab014f01387328ef48-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c668ceea0ae18ab014f01387328ef48\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:02:28.397725 kubelet[2366]: I0129 12:02:28.397575 2366 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c668ceea0ae18ab014f01387328ef48-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c668ceea0ae18ab014f01387328ef48\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:02:28.612499 kubelet[2366]: E0129 12:02:28.612454 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:28.613206 containerd[1571]: time="2025-01-29T12:02:28.613164661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 29 12:02:28.614302 kubelet[2366]: E0129 12:02:28.614268 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:28.614612 containerd[1571]: time="2025-01-29T12:02:28.614585135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3c668ceea0ae18ab014f01387328ef48,Namespace:kube-system,Attempt:0,}" Jan 29 12:02:28.615775 kubelet[2366]: E0129 12:02:28.615752 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:28.616070 containerd[1571]: time="2025-01-29T12:02:28.616038570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 29 12:02:28.883611 kubelet[2366]: E0129 12:02:28.883515 2366 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:29.792724 kubelet[2366]: E0129 12:02:29.792666 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="3.2s" Jan 29 12:02:29.896305 kubelet[2366]: I0129 12:02:29.896261 2366 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:02:29.896703 kubelet[2366]: E0129 12:02:29.896647 2366 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Jan 29 12:02:30.183356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount726040490.mount: Deactivated successfully. Jan 29 12:02:30.187601 containerd[1571]: time="2025-01-29T12:02:30.187547909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:02:30.188663 containerd[1571]: time="2025-01-29T12:02:30.188564425Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 12:02:30.195756 containerd[1571]: time="2025-01-29T12:02:30.195699965Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:02:30.197251 containerd[1571]: time="2025-01-29T12:02:30.197178176Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:02:30.198037 containerd[1571]: time="2025-01-29T12:02:30.198003474Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:02:30.199240 containerd[1571]: time="2025-01-29T12:02:30.199160423Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:02:30.200243 containerd[1571]: time="2025-01-29T12:02:30.200184864Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:02:30.201521 containerd[1571]: time="2025-01-29T12:02:30.201482577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:02:30.202459 containerd[1571]: time="2025-01-29T12:02:30.202420215Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.589162369s" Jan 29 12:02:30.207221 containerd[1571]: time="2025-01-29T12:02:30.207177144Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.592531606s" Jan 29 12:02:30.210479 containerd[1571]: time="2025-01-29T12:02:30.208534239Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.592437219s" Jan 29 12:02:30.298578 kubelet[2366]: W0129 12:02:30.298462 2366 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:30.298578 kubelet[2366]: E0129 12:02:30.298522 2366 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:30.337863 kubelet[2366]: W0129 12:02:30.337796 2366 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:30.337863 kubelet[2366]: E0129 12:02:30.337858 2366 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:30.343844 containerd[1571]: time="2025-01-29T12:02:30.342272618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:30.343844 containerd[1571]: time="2025-01-29T12:02:30.342340656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:30.343844 containerd[1571]: time="2025-01-29T12:02:30.342355654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:30.343844 containerd[1571]: time="2025-01-29T12:02:30.343322787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:30.347451 containerd[1571]: time="2025-01-29T12:02:30.346383156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:30.347451 containerd[1571]: time="2025-01-29T12:02:30.346877262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:30.347451 containerd[1571]: time="2025-01-29T12:02:30.346899033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:30.347451 containerd[1571]: time="2025-01-29T12:02:30.346971158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:30.347451 containerd[1571]: time="2025-01-29T12:02:30.346279892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:30.347451 containerd[1571]: time="2025-01-29T12:02:30.346356606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:30.347451 containerd[1571]: time="2025-01-29T12:02:30.346377264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:30.347451 containerd[1571]: time="2025-01-29T12:02:30.346527586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:30.403296 containerd[1571]: time="2025-01-29T12:02:30.403249640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3c668ceea0ae18ab014f01387328ef48,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae17d37510efd188a05cdff681f834b4746aed9af1578ff538effb668a30b8a7\"" Jan 29 12:02:30.404501 kubelet[2366]: E0129 12:02:30.404469 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:30.407542 containerd[1571]: time="2025-01-29T12:02:30.407484531Z" level=info msg="CreateContainer within sandbox \"ae17d37510efd188a05cdff681f834b4746aed9af1578ff538effb668a30b8a7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:02:30.413103 containerd[1571]: time="2025-01-29T12:02:30.413066908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"abb20355a065a89cbb0f653bf165f60361e4e4718db534562dc93932c85d74be\"" Jan 29 12:02:30.414059 kubelet[2366]: E0129 12:02:30.414025 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:30.415989 containerd[1571]: time="2025-01-29T12:02:30.415966384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca059813e90af4f9036d76335b9c5462e7c61182a5e70c5143e1a2a011118edb\"" Jan 29 12:02:30.417005 containerd[1571]: time="2025-01-29T12:02:30.416901888Z" level=info msg="CreateContainer within sandbox \"abb20355a065a89cbb0f653bf165f60361e4e4718db534562dc93932c85d74be\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:02:30.417589 kubelet[2366]: E0129 12:02:30.417558 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:30.419460 containerd[1571]: time="2025-01-29T12:02:30.419426692Z" level=info msg="CreateContainer within sandbox \"ca059813e90af4f9036d76335b9c5462e7c61182a5e70c5143e1a2a011118edb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:02:30.544577 kubelet[2366]: W0129 12:02:30.544421 2366 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:30.544577 kubelet[2366]: E0129 12:02:30.544492 2366 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:30.776129 containerd[1571]: time="2025-01-29T12:02:30.776075228Z" level=info msg="CreateContainer within sandbox \"ae17d37510efd188a05cdff681f834b4746aed9af1578ff538effb668a30b8a7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"07543ad0af9f9650ee1adc5e3ae329fa1f855dcb4accd91a39fb68fb5461012e\"" Jan 29 12:02:30.776756 containerd[1571]: time="2025-01-29T12:02:30.776728523Z" level=info msg="StartContainer for \"07543ad0af9f9650ee1adc5e3ae329fa1f855dcb4accd91a39fb68fb5461012e\"" Jan 29 12:02:30.780675 containerd[1571]: time="2025-01-29T12:02:30.780631782Z" level=info msg="CreateContainer within sandbox \"abb20355a065a89cbb0f653bf165f60361e4e4718db534562dc93932c85d74be\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fc755d50e723bf50030dd13029e2756fe275088c1a964c0f8f735adff1f9de15\"" Jan 29 12:02:30.781630 containerd[1571]: time="2025-01-29T12:02:30.781582584Z" level=info msg="StartContainer for \"fc755d50e723bf50030dd13029e2756fe275088c1a964c0f8f735adff1f9de15\"" Jan 29 12:02:30.786502 containerd[1571]: time="2025-01-29T12:02:30.786400448Z" level=info msg="CreateContainer within sandbox \"ca059813e90af4f9036d76335b9c5462e7c61182a5e70c5143e1a2a011118edb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"09ea75b014a1d96b93c0334e221427631c5e668ec4f032ca5b5f7cc8ff4f7a6d\"" Jan 29 12:02:30.788008 containerd[1571]: time="2025-01-29T12:02:30.787037082Z" level=info msg="StartContainer for \"09ea75b014a1d96b93c0334e221427631c5e668ec4f032ca5b5f7cc8ff4f7a6d\"" Jan 29 12:02:30.794992 kubelet[2366]: W0129 12:02:30.794869 2366 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:30.794992 kubelet[2366]: E0129 12:02:30.794920 2366 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jan 29 12:02:30.859946 containerd[1571]: time="2025-01-29T12:02:30.859890391Z" level=info msg="StartContainer for \"fc755d50e723bf50030dd13029e2756fe275088c1a964c0f8f735adff1f9de15\" returns successfully" Jan 29 12:02:30.874203 containerd[1571]: time="2025-01-29T12:02:30.874136453Z" level=info msg="StartContainer for \"07543ad0af9f9650ee1adc5e3ae329fa1f855dcb4accd91a39fb68fb5461012e\" returns successfully" Jan 29 12:02:30.874345 containerd[1571]: time="2025-01-29T12:02:30.874169515Z" level=info msg="StartContainer for \"09ea75b014a1d96b93c0334e221427631c5e668ec4f032ca5b5f7cc8ff4f7a6d\" returns successfully" Jan 29 12:02:31.832860 kubelet[2366]: E0129 12:02:31.825295 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:31.838664 kubelet[2366]: E0129 12:02:31.838625 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:31.847696 kubelet[2366]: E0129 12:02:31.847635 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:32.254394 kubelet[2366]: E0129 12:02:32.254300 2366 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 29 12:02:32.617825 kubelet[2366]: E0129 12:02:32.617667 2366 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 29 12:02:32.847642 kubelet[2366]: E0129 12:02:32.847596 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:32.847642 kubelet[2366]: E0129 12:02:32.847605 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:32.848120 kubelet[2366]: E0129 12:02:32.847654 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:32.997104 kubelet[2366]: E0129 12:02:32.996698 2366 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 12:02:33.061087 kubelet[2366]: E0129 12:02:33.061037 2366 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 29 12:02:33.099281 kubelet[2366]: I0129 12:02:33.099247 2366 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:02:33.111129 kubelet[2366]: I0129 12:02:33.111087 2366 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 12:02:33.117492 kubelet[2366]: E0129 12:02:33.117453 2366 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:02:33.217788 kubelet[2366]: E0129 12:02:33.217743 2366 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:02:33.318674 kubelet[2366]: E0129 12:02:33.318549 2366 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:02:33.419114 kubelet[2366]: E0129 12:02:33.419073 2366 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:02:33.519710 kubelet[2366]: E0129 12:02:33.519665 2366 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:02:33.620653 kubelet[2366]: E0129 12:02:33.620503 2366 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:02:33.720926 kubelet[2366]: E0129 12:02:33.720796 2366 kubelet_node_status.go:462] "Error getting the current node from lister" 
err="node \"localhost\" not found" Jan 29 12:02:33.821535 kubelet[2366]: E0129 12:02:33.821487 2366 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:02:33.849519 kubelet[2366]: E0129 12:02:33.849490 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:33.901984 systemd[1]: Reloading requested from client PID 2636 ('systemctl') (unit session-7.scope)... Jan 29 12:02:33.902000 systemd[1]: Reloading... Jan 29 12:02:33.922254 kubelet[2366]: E0129 12:02:33.922215 2366 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:02:33.966838 zram_generator::config[2681]: No configuration found. Jan 29 12:02:34.023307 kubelet[2366]: E0129 12:02:34.023221 2366 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:02:34.077244 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:02:34.123964 kubelet[2366]: E0129 12:02:34.123785 2366 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:02:34.161932 systemd[1]: Reloading finished in 259 ms. Jan 29 12:02:34.198948 kubelet[2366]: I0129 12:02:34.198805 2366 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:02:34.198930 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:34.216661 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:02:34.217217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:34.230421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:34.383984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:34.389743 (kubelet)[2730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:02:34.432034 kubelet[2730]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:02:34.432034 kubelet[2730]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:02:34.432034 kubelet[2730]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 12:02:34.432418 kubelet[2730]: I0129 12:02:34.432014 2730 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:02:34.437028 kubelet[2730]: I0129 12:02:34.436997 2730 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:02:34.437028 kubelet[2730]: I0129 12:02:34.437019 2730 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:02:34.437216 kubelet[2730]: I0129 12:02:34.437199 2730 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:02:34.438606 kubelet[2730]: I0129 12:02:34.438576 2730 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:02:34.439907 kubelet[2730]: I0129 12:02:34.439871 2730 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:02:34.450284 kubelet[2730]: I0129 12:02:34.450227 2730 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 12:02:34.450990 kubelet[2730]: I0129 12:02:34.450946 2730 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:02:34.451154 kubelet[2730]: I0129 12:02:34.450975 2730 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:02:34.451154 kubelet[2730]: I0129 12:02:34.451154 2730 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:02:34.451154 kubelet[2730]: I0129 12:02:34.451172 2730 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:02:34.451338 kubelet[2730]: I0129 12:02:34.451214 2730 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:02:34.451361 kubelet[2730]: I0129 12:02:34.451338 2730 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:02:34.451361 kubelet[2730]: I0129 12:02:34.451352 2730 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Jan 29 12:02:34.451398 kubelet[2730]: I0129 12:02:34.451377 2730 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:02:34.451398 kubelet[2730]: I0129 12:02:34.451396 2730 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:02:34.452472 kubelet[2730]: I0129 12:02:34.452152 2730 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:02:34.452472 kubelet[2730]: I0129 12:02:34.452321 2730 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:02:34.452690 kubelet[2730]: I0129 12:02:34.452662 2730 server.go:1264] "Started kubelet" Jan 29 12:02:34.455847 kubelet[2730]: I0129 12:02:34.453026 2730 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:02:34.455847 kubelet[2730]: I0129 12:02:34.453866 2730 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:02:34.455847 kubelet[2730]: I0129 12:02:34.454107 2730 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:02:34.455847 kubelet[2730]: I0129 12:02:34.454218 2730 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:02:34.456311 kubelet[2730]: I0129 12:02:34.456299 2730 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:02:34.459907 kubelet[2730]: E0129 12:02:34.459879 2730 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:02:34.459964 kubelet[2730]: I0129 12:02:34.459931 2730 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:02:34.461075 kubelet[2730]: I0129 12:02:34.460027 2730 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:02:34.461486 kubelet[2730]: I0129 12:02:34.461444 2730 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:02:34.464923 kubelet[2730]: I0129 12:02:34.464491 2730 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:02:34.464923 kubelet[2730]: I0129 12:02:34.464605 2730 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:02:34.466148 kubelet[2730]: E0129 12:02:34.466103 2730 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:02:34.466234 kubelet[2730]: I0129 12:02:34.466197 2730 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:02:34.475656 kubelet[2730]: I0129 12:02:34.475523 2730 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:02:34.477850 kubelet[2730]: I0129 12:02:34.477780 2730 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 12:02:34.478300 kubelet[2730]: I0129 12:02:34.478232 2730 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:02:34.478300 kubelet[2730]: I0129 12:02:34.478260 2730 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:02:34.478590 kubelet[2730]: E0129 12:02:34.478552 2730 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:02:34.518330 kubelet[2730]: I0129 12:02:34.518307 2730 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:02:34.518330 kubelet[2730]: I0129 12:02:34.518321 2730 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:02:34.518330 kubelet[2730]: I0129 12:02:34.518339 2730 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:02:34.518490 kubelet[2730]: I0129 12:02:34.518475 2730 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:02:34.518534 kubelet[2730]: I0129 12:02:34.518492 2730 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:02:34.518534 kubelet[2730]: I0129 12:02:34.518508 2730 policy_none.go:49] "None policy: Start" Jan 29 12:02:34.519113 kubelet[2730]: I0129 12:02:34.519093 2730 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:02:34.519183 kubelet[2730]: I0129 12:02:34.519116 2730 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:02:34.519289 kubelet[2730]: I0129 12:02:34.519270 2730 state_mem.go:75] "Updated machine memory state" Jan 29 12:02:34.520792 kubelet[2730]: I0129 12:02:34.520769 2730 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:02:34.521013 kubelet[2730]: I0129 12:02:34.520977 2730 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:02:34.521090 kubelet[2730]: I0129 12:02:34.521075 2730 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:02:34.564268 kubelet[2730]: I0129 12:02:34.564233 2730 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:02:34.570153 kubelet[2730]: I0129 12:02:34.570121 2730 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 29 12:02:34.570245 kubelet[2730]: I0129 12:02:34.570225 2730 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 12:02:34.579864 kubelet[2730]: I0129 12:02:34.579439 2730 topology_manager.go:215] "Topology Admit Handler" podUID="3c668ceea0ae18ab014f01387328ef48" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 12:02:34.579864 kubelet[2730]: I0129 12:02:34.579538 2730 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 12:02:34.579864 kubelet[2730]: I0129 12:02:34.579598 2730 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 12:02:34.763070 kubelet[2730]: I0129 12:02:34.762920 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c668ceea0ae18ab014f01387328ef48-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3c668ceea0ae18ab014f01387328ef48\") " pod="kube-system/kube-apiserver-localhost" 
Jan 29 12:02:34.763070 kubelet[2730]: I0129 12:02:34.762979 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:02:34.763070 kubelet[2730]: I0129 12:02:34.763026 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:02:34.763070 kubelet[2730]: I0129 12:02:34.763057 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c668ceea0ae18ab014f01387328ef48-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c668ceea0ae18ab014f01387328ef48\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:02:34.763288 kubelet[2730]: I0129 12:02:34.763079 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c668ceea0ae18ab014f01387328ef48-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c668ceea0ae18ab014f01387328ef48\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:02:34.763288 kubelet[2730]: I0129 12:02:34.763098 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:02:34.763288 kubelet[2730]: I0129 12:02:34.763118 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:02:34.763288 kubelet[2730]: I0129 12:02:34.763138 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:02:34.763288 kubelet[2730]: I0129 12:02:34.763182 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 12:02:34.890365 sudo[2764]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 12:02:34.890865 kubelet[2730]: E0129 12:02:34.890681 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 
12:02:34.890899 sudo[2764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 12:02:34.890937 kubelet[2730]: E0129 12:02:34.890910 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:34.891161 kubelet[2730]: E0129 12:02:34.891140 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:35.391097 sudo[2764]: pam_unix(sudo:session): session closed for user root Jan 29 12:02:35.452779 kubelet[2730]: I0129 12:02:35.452722 2730 apiserver.go:52] "Watching apiserver" Jan 29 12:02:35.462085 kubelet[2730]: I0129 12:02:35.462035 2730 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:02:35.484612 kubelet[2730]: I0129 12:02:35.484545 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.48452827 podStartE2EDuration="1.48452827s" podCreationTimestamp="2025-01-29 12:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:02:35.484521346 +0000 UTC m=+1.090753976" watchObservedRunningTime="2025-01-29 12:02:35.48452827 +0000 UTC m=+1.090760900" Jan 29 12:02:35.492688 kubelet[2730]: E0129 12:02:35.492319 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:35.492688 kubelet[2730]: E0129 12:02:35.492359 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:35.492688 kubelet[2730]: E0129 12:02:35.492672 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:35.499571 kubelet[2730]: I0129 12:02:35.499250 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.499237086 podStartE2EDuration="1.499237086s" podCreationTimestamp="2025-01-29 12:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:02:35.49122837 +0000 UTC m=+1.097461000" watchObservedRunningTime="2025-01-29 12:02:35.499237086 +0000 UTC m=+1.105469716" Jan 29 12:02:35.507151 kubelet[2730]: I0129 12:02:35.507097 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5070753159999999 podStartE2EDuration="1.507075316s" podCreationTimestamp="2025-01-29 12:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:02:35.49973 +0000 UTC m=+1.105962630" watchObservedRunningTime="2025-01-29 12:02:35.507075316 +0000 UTC m=+1.113307946" Jan 29 12:02:36.492977 kubelet[2730]: E0129 12:02:36.492858 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 29 12:02:36.492977 kubelet[2730]: E0129 12:02:36.492927 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:36.822136 sudo[1763]: pam_unix(sudo:session): session closed for user root Jan 29 12:02:36.824123 sshd[1756]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:36.828692 systemd[1]: sshd@6-10.0.0.134:22-10.0.0.1:43142.service: Deactivated successfully. Jan 29 12:02:36.830929 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:02:36.831091 systemd-logind[1547]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:02:36.832302 systemd-logind[1547]: Removed session 7. Jan 29 12:02:37.494080 kubelet[2730]: E0129 12:02:37.494035 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:40.102890 kubelet[2730]: E0129 12:02:40.102831 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:40.497367 kubelet[2730]: E0129 12:02:40.497335 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:45.468093 update_engine[1551]: I20250129 12:02:45.468011 1551 update_attempter.cc:509] Updating boot flags... Jan 29 12:02:45.511187 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2814) Jan 29 12:02:45.561886 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2818) Jan 29 12:02:45.595837 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2818) Jan 29 12:02:46.002485 kubelet[2730]: E0129 12:02:46.002446 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:46.252274 kubelet[2730]: E0129 12:02:46.252222 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:48.379911 kubelet[2730]: I0129 12:02:48.379873 2730 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:02:48.380412 kubelet[2730]: I0129 12:02:48.380394 2730 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:02:48.380466 containerd[1571]: time="2025-01-29T12:02:48.380218900Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 12:02:48.903919 kubelet[2730]: I0129 12:02:48.903791 2730 topology_manager.go:215] "Topology Admit Handler" podUID="f3994d13-32ef-43f4-b485-e2c53bdbc41f" podNamespace="kube-system" podName="kube-proxy-7c5wf" Jan 29 12:02:48.912198 kubelet[2730]: I0129 12:02:48.910849 2730 topology_manager.go:215] "Topology Admit Handler" podUID="f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" podNamespace="kube-system" podName="cilium-lv2zq" Jan 29 12:02:48.954798 kubelet[2730]: I0129 12:02:48.954743 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-cni-path\") pod \"cilium-lv2zq\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " pod="kube-system/cilium-lv2zq" Jan 29 12:02:48.954798 kubelet[2730]: I0129 12:02:48.954785 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-cilium-run\") pod \"cilium-lv2zq\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " pod="kube-system/cilium-lv2zq" Jan 29 12:02:48.954798 kubelet[2730]: I0129 12:02:48.954801 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-hostproc\") pod \"cilium-lv2zq\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " pod="kube-system/cilium-lv2zq" Jan 29 12:02:48.955002 kubelet[2730]: I0129 12:02:48.954845 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7vhv\" (UniqueName: \"kubernetes.io/projected/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-kube-api-access-v7vhv\") pod \"cilium-lv2zq\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " pod="kube-system/cilium-lv2zq" Jan 29 12:02:48.955002 kubelet[2730]: I0129 12:02:48.954862 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3994d13-32ef-43f4-b485-e2c53bdbc41f-xtables-lock\") pod \"kube-proxy-7c5wf\" (UID: \"f3994d13-32ef-43f4-b485-e2c53bdbc41f\") " pod="kube-system/kube-proxy-7c5wf" Jan 29 12:02:48.955002 kubelet[2730]: I0129 12:02:48.954876 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5pbv\" (UniqueName: \"kubernetes.io/projected/f3994d13-32ef-43f4-b485-e2c53bdbc41f-kube-api-access-t5pbv\") pod \"kube-proxy-7c5wf\" (UID: \"f3994d13-32ef-43f4-b485-e2c53bdbc41f\") " pod="kube-system/kube-proxy-7c5wf" Jan 29 12:02:48.955002 kubelet[2730]: I0129 12:02:48.954894 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-lib-modules\") pod \"cilium-lv2zq\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " pod="kube-system/cilium-lv2zq" Jan 29 12:02:48.955002 kubelet[2730]: I0129 12:02:48.954907 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-clustermesh-secrets\") pod \"cilium-lv2zq\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " pod="kube-system/cilium-lv2zq" Jan 29 12:02:48.955105 kubelet[2730]: I0129 12:02:48.954920 2730 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f3994d13-32ef-43f4-b485-e2c53bdbc41f-kube-proxy\") pod \"kube-proxy-7c5wf\" (UID: \"f3994d13-32ef-43f4-b485-e2c53bdbc41f\") " pod="kube-system/kube-proxy-7c5wf" Jan 29 12:02:48.955105 kubelet[2730]: I0129 12:02:48.954934 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-cilium-cgroup\") pod \"cilium-lv2zq\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " pod="kube-system/cilium-lv2zq" Jan 29 12:02:48.955105 kubelet[2730]: I0129 12:02:48.954949 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-host-proc-sys-net\") pod \"cilium-lv2zq\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " pod="kube-system/cilium-lv2zq" Jan 29 12:02:48.955105 kubelet[2730]: I0129 12:02:48.954974 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-hubble-tls\") pod \"cilium-lv2zq\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " pod="kube-system/cilium-lv2zq" Jan 29 12:02:48.955105 kubelet[2730]: I0129 12:02:48.954989 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-bpf-maps\") pod \"cilium-lv2zq\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " pod="kube-system/cilium-lv2zq" Jan 29 12:02:48.955105 kubelet[2730]: I0129 12:02:48.955046 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-xtables-lock\") pod \"cilium-lv2zq\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " pod="kube-system/cilium-lv2zq" Jan 29 12:02:48.955221 kubelet[2730]: I0129 12:02:48.955085 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-cilium-config-path\") pod \"cilium-lv2zq\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " pod="kube-system/cilium-lv2zq" Jan 29 12:02:48.955221 kubelet[2730]: I0129 12:02:48.955109 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-host-proc-sys-kernel\") pod \"cilium-lv2zq\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " pod="kube-system/cilium-lv2zq" Jan 29 12:02:48.955221 kubelet[2730]: I0129 12:02:48.955127 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3994d13-32ef-43f4-b485-e2c53bdbc41f-lib-modules\") pod \"kube-proxy-7c5wf\" (UID: \"f3994d13-32ef-43f4-b485-e2c53bdbc41f\") " pod="kube-system/kube-proxy-7c5wf" Jan 29 12:02:48.955221 kubelet[2730]: I0129 12:02:48.955144 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-etc-cni-netd\") 
pod \"cilium-lv2zq\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " pod="kube-system/cilium-lv2zq" Jan 29 12:02:49.500491 kubelet[2730]: I0129 12:02:49.499922 2730 topology_manager.go:215] "Topology Admit Handler" podUID="bedbb8f7-003d-4c64-a530-79785223949b" podNamespace="kube-system" podName="cilium-operator-599987898-pl5mn" Jan 29 12:02:49.509491 kubelet[2730]: E0129 12:02:49.507905 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:49.510626 containerd[1571]: time="2025-01-29T12:02:49.510558510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7c5wf,Uid:f3994d13-32ef-43f4-b485-e2c53bdbc41f,Namespace:kube-system,Attempt:0,}" Jan 29 12:02:49.516415 kubelet[2730]: E0129 12:02:49.516283 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:49.517584 containerd[1571]: time="2025-01-29T12:02:49.517544401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lv2zq,Uid:f1dc81e4-d50e-4420-8c0b-2ddaacff48ff,Namespace:kube-system,Attempt:0,}" Jan 29 12:02:49.544103 containerd[1571]: time="2025-01-29T12:02:49.543970649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:49.544103 containerd[1571]: time="2025-01-29T12:02:49.544063925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:49.544241 containerd[1571]: time="2025-01-29T12:02:49.544104903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:49.544263 containerd[1571]: time="2025-01-29T12:02:49.544234618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:49.550167 containerd[1571]: time="2025-01-29T12:02:49.550059023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:49.550167 containerd[1571]: time="2025-01-29T12:02:49.550128835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:49.550167 containerd[1571]: time="2025-01-29T12:02:49.550148131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:49.550495 containerd[1571]: time="2025-01-29T12:02:49.550448820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:49.559533 kubelet[2730]: I0129 12:02:49.559497 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk8k9\" (UniqueName: \"kubernetes.io/projected/bedbb8f7-003d-4c64-a530-79785223949b-kube-api-access-zk8k9\") pod \"cilium-operator-599987898-pl5mn\" (UID: \"bedbb8f7-003d-4c64-a530-79785223949b\") " pod="kube-system/cilium-operator-599987898-pl5mn" Jan 29 12:02:49.559611 kubelet[2730]: I0129 12:02:49.559540 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bedbb8f7-003d-4c64-a530-79785223949b-cilium-config-path\") pod \"cilium-operator-599987898-pl5mn\" (UID: \"bedbb8f7-003d-4c64-a530-79785223949b\") " pod="kube-system/cilium-operator-599987898-pl5mn" Jan 29 12:02:49.584365 containerd[1571]: time="2025-01-29T12:02:49.584323082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7c5wf,Uid:f3994d13-32ef-43f4-b485-e2c53bdbc41f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a998402e083a2eea70801022b5965610f7b5b8ad27bf48637c260fddb3977dbd\"" Jan 29 12:02:49.586544 kubelet[2730]: E0129 12:02:49.586522 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:49.589057 containerd[1571]: time="2025-01-29T12:02:49.588985761Z" level=info msg="CreateContainer within sandbox \"a998402e083a2eea70801022b5965610f7b5b8ad27bf48637c260fddb3977dbd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:02:49.589553 containerd[1571]: time="2025-01-29T12:02:49.589526893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lv2zq,Uid:f1dc81e4-d50e-4420-8c0b-2ddaacff48ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"86a7c85dcaba4021847d9da943330bf019f67eba3b65de1bda8d942fd3ebdca3\"" Jan 29 12:02:49.590124 kubelet[2730]: E0129 12:02:49.590105 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:49.591864 containerd[1571]: time="2025-01-29T12:02:49.591836642Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 12:02:49.606478 containerd[1571]: time="2025-01-29T12:02:49.606431845Z" level=info msg="CreateContainer within sandbox \"a998402e083a2eea70801022b5965610f7b5b8ad27bf48637c260fddb3977dbd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"51f9c1653e964adb5e5b781918c70c9244093b3336652a29f5afbaba93cf2883\"" Jan 29 12:02:49.606981 containerd[1571]: time="2025-01-29T12:02:49.606956416Z" level=info msg="StartContainer for \"51f9c1653e964adb5e5b781918c70c9244093b3336652a29f5afbaba93cf2883\"" Jan 29 12:02:49.670858 containerd[1571]: time="2025-01-29T12:02:49.670787252Z" level=info msg="StartContainer for \"51f9c1653e964adb5e5b781918c70c9244093b3336652a29f5afbaba93cf2883\" returns successfully" Jan 29 12:02:49.812507 kubelet[2730]: E0129 12:02:49.812388 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:49.813020 containerd[1571]: time="2025-01-29T12:02:49.812986868Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:cilium-operator-599987898-pl5mn,Uid:bedbb8f7-003d-4c64-a530-79785223949b,Namespace:kube-system,Attempt:0,}" Jan 29 12:02:49.838957 containerd[1571]: time="2025-01-29T12:02:49.838653168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:49.838957 containerd[1571]: time="2025-01-29T12:02:49.838742777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:49.838957 containerd[1571]: time="2025-01-29T12:02:49.838776972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:49.839742 containerd[1571]: time="2025-01-29T12:02:49.839670080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:49.890646 containerd[1571]: time="2025-01-29T12:02:49.890586784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-pl5mn,Uid:bedbb8f7-003d-4c64-a530-79785223949b,Namespace:kube-system,Attempt:0,} returns sandbox id \"87cf589d86a394eab94f2f5c1db5f5a06b706ef382542fe18e3d2ad1e6f7529f\"" Jan 29 12:02:49.891356 kubelet[2730]: E0129 12:02:49.891333 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:50.516514 kubelet[2730]: E0129 12:02:50.516481 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:50.525035 kubelet[2730]: I0129 12:02:50.524982 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7c5wf" podStartSLOduration=2.524963326 podStartE2EDuration="2.524963326s" podCreationTimestamp="2025-01-29 12:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:02:50.524717922 +0000 UTC m=+16.130950572" watchObservedRunningTime="2025-01-29 12:02:50.524963326 +0000 UTC m=+16.131195976" Jan 29 12:02:53.331025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount373293190.mount: Deactivated successfully. 
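Each "operationExecutor.VerifyControllerAttachedVolume started for volume ..." record above corresponds to one entry in the admitted pod's spec.volumes: host paths such as cni-path, bpf-maps and hostproc for the Cilium agent, a ConfigMap for kube-proxy, and the auto-injected projected kube-api-access-* volume that carries the service-account token. As a rough sketch, one of those host-path entries built with the Kubernetes API types could look like the following; the log only records the volume names, so the /opt/cni/bin path and the directory type are assumptions for illustration:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One host-path volume of the kind the reconciler verifies above.
	// Path and type are hypothetical; only the name appears in the log.
	dirOrCreate := corev1.HostPathDirectoryOrCreate
	cniPath := corev1.Volume{
		Name: "cni-path",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/opt/cni/bin",
				Type: &dirOrCreate,
			},
		},
	}
	fmt.Printf("%s -> hostPath %s\n", cniPath.Name, cniPath.VolumeSource.HostPath.Path)
}
```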
Jan 29 12:02:55.510547 containerd[1571]: time="2025-01-29T12:02:55.510472968Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:55.511271 containerd[1571]: time="2025-01-29T12:02:55.511215819Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 12:02:55.512424 containerd[1571]: time="2025-01-29T12:02:55.512389472Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:55.514443 containerd[1571]: time="2025-01-29T12:02:55.514411867Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.922538907s" Jan 29 12:02:55.514491 containerd[1571]: time="2025-01-29T12:02:55.514444258Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 12:02:55.516728 containerd[1571]: time="2025-01-29T12:02:55.516705463Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 12:02:55.523519 containerd[1571]: time="2025-01-29T12:02:55.523481322Z" level=info msg="CreateContainer within sandbox \"86a7c85dcaba4021847d9da943330bf019f67eba3b65de1bda8d942fd3ebdca3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 12:02:55.554360 containerd[1571]: time="2025-01-29T12:02:55.554289777Z" level=info msg="CreateContainer within sandbox \"86a7c85dcaba4021847d9da943330bf019f67eba3b65de1bda8d942fd3ebdca3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e\"" Jan 29 12:02:55.558461 containerd[1571]: time="2025-01-29T12:02:55.558408654Z" level=info msg="StartContainer for \"9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e\"" Jan 29 12:02:55.582193 systemd[1]: run-containerd-runc-k8s.io-9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e-runc.4u0YLv.mount: Deactivated successfully. 
Jan 29 12:02:55.628225 containerd[1571]: time="2025-01-29T12:02:55.628176135Z" level=info msg="StartContainer for \"9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e\" returns successfully" Jan 29 12:02:56.194379 containerd[1571]: time="2025-01-29T12:02:56.192686134Z" level=info msg="shim disconnected" id=9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e namespace=k8s.io Jan 29 12:02:56.194379 containerd[1571]: time="2025-01-29T12:02:56.194356523Z" level=warning msg="cleaning up after shim disconnected" id=9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e namespace=k8s.io Jan 29 12:02:56.194379 containerd[1571]: time="2025-01-29T12:02:56.194367393Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:56.543539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e-rootfs.mount: Deactivated successfully. Jan 29 12:02:56.577862 kubelet[2730]: E0129 12:02:56.577838 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:56.583165 containerd[1571]: time="2025-01-29T12:02:56.583102155Z" level=info msg="CreateContainer within sandbox \"86a7c85dcaba4021847d9da943330bf019f67eba3b65de1bda8d942fd3ebdca3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 12:02:56.638981 containerd[1571]: time="2025-01-29T12:02:56.638932291Z" level=info msg="CreateContainer within sandbox \"86a7c85dcaba4021847d9da943330bf019f67eba3b65de1bda8d942fd3ebdca3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f09ff8a6e585caeb6c6aa8ac0ee170317ff65c3b6cb49d1a0eb00d3aefebb6e3\"" Jan 29 12:02:56.639744 containerd[1571]: time="2025-01-29T12:02:56.639598027Z" level=info msg="StartContainer for \"f09ff8a6e585caeb6c6aa8ac0ee170317ff65c3b6cb49d1a0eb00d3aefebb6e3\"" Jan 29 12:02:56.695521 containerd[1571]: time="2025-01-29T12:02:56.695479320Z" level=info msg="StartContainer for \"f09ff8a6e585caeb6c6aa8ac0ee170317ff65c3b6cb49d1a0eb00d3aefebb6e3\" returns successfully" Jan 29 12:02:56.705514 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:02:56.706504 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:02:56.706572 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:02:56.712213 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:02:56.728169 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
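The sequence above — the Cilium image pulled by digest, CreateContainer and StartContainer returning successfully, then "shim disconnected", "cleaning up dead shim" and a rootfs.mount deactivation — is the normal life of a short-lived container such as the mount-cgroup init step: it runs to completion, its shim exits, and the runtime cleans the task up. A standalone sketch of that run-to-completion flow with the containerd Go client (the kubelet drives the real thing through CRI; the socket path, the k8s.io namespace and the image digest are the ones seen in this log, while the container ID and the `true` command are made up for the demo and assume the image ships such a binary):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull by digest, as logged for the Cilium image above.
	ref := "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (digest %s)", img.Name(), img.Target().Digest)

	// Create a throwaway container that exits immediately.
	container, err := client.NewContainer(ctx, "demo-init",
		containerd.WithImage(img),
		containerd.WithNewSnapshot("demo-init-snap", img),
		containerd.WithNewSpec(oci.WithImageConfig(img), oci.WithProcessArgs("true")),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	exitCh, err := task.Wait(ctx) // subscribe to the exit event before starting
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}

	status := <-exitCh // the task runs to completion, like an init container
	code, _, _ := status.Result()
	log.Printf("exited with status %d", code)

	// Deleting the task is what lets the shim exit and the rootfs
	// mounts be cleaned up, as in the log records above.
	if _, err := task.Delete(ctx); err != nil {
		log.Fatal(err)
	}
}
```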
Jan 29 12:02:56.734568 containerd[1571]: time="2025-01-29T12:02:56.734505266Z" level=info msg="shim disconnected" id=f09ff8a6e585caeb6c6aa8ac0ee170317ff65c3b6cb49d1a0eb00d3aefebb6e3 namespace=k8s.io Jan 29 12:02:56.734568 containerd[1571]: time="2025-01-29T12:02:56.734553837Z" level=warning msg="cleaning up after shim disconnected" id=f09ff8a6e585caeb6c6aa8ac0ee170317ff65c3b6cb49d1a0eb00d3aefebb6e3 namespace=k8s.io Jan 29 12:02:56.734568 containerd[1571]: time="2025-01-29T12:02:56.734561832Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:57.542793 kubelet[2730]: E0129 12:02:57.542759 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:57.544102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f09ff8a6e585caeb6c6aa8ac0ee170317ff65c3b6cb49d1a0eb00d3aefebb6e3-rootfs.mount: Deactivated successfully. Jan 29 12:02:57.545188 containerd[1571]: time="2025-01-29T12:02:57.545135588Z" level=info msg="CreateContainer within sandbox \"86a7c85dcaba4021847d9da943330bf019f67eba3b65de1bda8d942fd3ebdca3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 12:02:57.559609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2008026584.mount: Deactivated successfully. Jan 29 12:02:57.563242 containerd[1571]: time="2025-01-29T12:02:57.563207909Z" level=info msg="CreateContainer within sandbox \"86a7c85dcaba4021847d9da943330bf019f67eba3b65de1bda8d942fd3ebdca3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c8cc926574caf0d88f1cd02f7f59ac70b73438ae77b7e1ae98cfbd92e7f596d6\"" Jan 29 12:02:57.563561 containerd[1571]: time="2025-01-29T12:02:57.563534855Z" level=info msg="StartContainer for \"c8cc926574caf0d88f1cd02f7f59ac70b73438ae77b7e1ae98cfbd92e7f596d6\"" Jan 29 12:02:57.619271 containerd[1571]: time="2025-01-29T12:02:57.619236563Z" level=info msg="StartContainer for \"c8cc926574caf0d88f1cd02f7f59ac70b73438ae77b7e1ae98cfbd92e7f596d6\" returns successfully" Jan 29 12:02:57.643274 containerd[1571]: time="2025-01-29T12:02:57.643219483Z" level=info msg="shim disconnected" id=c8cc926574caf0d88f1cd02f7f59ac70b73438ae77b7e1ae98cfbd92e7f596d6 namespace=k8s.io Jan 29 12:02:57.643274 containerd[1571]: time="2025-01-29T12:02:57.643270769Z" level=warning msg="cleaning up after shim disconnected" id=c8cc926574caf0d88f1cd02f7f59ac70b73438ae77b7e1ae98cfbd92e7f596d6 namespace=k8s.io Jan 29 12:02:57.643474 containerd[1571]: time="2025-01-29T12:02:57.643281159Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:58.419915 systemd-resolved[1459]: Under memory pressure, flushing caches. Jan 29 12:02:58.419965 systemd-resolved[1459]: Flushed all caches. Jan 29 12:02:58.421837 systemd-journald[1155]: Under memory pressure, flushing caches. Jan 29 12:02:58.543660 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8cc926574caf0d88f1cd02f7f59ac70b73438ae77b7e1ae98cfbd92e7f596d6-rootfs.mount: Deactivated successfully. 
Jan 29 12:02:58.546374 kubelet[2730]: E0129 12:02:58.546351 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:58.548087 containerd[1571]: time="2025-01-29T12:02:58.548038688Z" level=info msg="CreateContainer within sandbox \"86a7c85dcaba4021847d9da943330bf019f67eba3b65de1bda8d942fd3ebdca3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 12:02:58.564009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3121632935.mount: Deactivated successfully. Jan 29 12:02:58.564599 containerd[1571]: time="2025-01-29T12:02:58.564561899Z" level=info msg="CreateContainer within sandbox \"86a7c85dcaba4021847d9da943330bf019f67eba3b65de1bda8d942fd3ebdca3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"86cc03faf806f207681b4108174d2795869e0a5a9baef9dd8799e6a929111b8f\"" Jan 29 12:02:58.565447 containerd[1571]: time="2025-01-29T12:02:58.565332531Z" level=info msg="StartContainer for \"86cc03faf806f207681b4108174d2795869e0a5a9baef9dd8799e6a929111b8f\"" Jan 29 12:02:58.621518 containerd[1571]: time="2025-01-29T12:02:58.621473543Z" level=info msg="StartContainer for \"86cc03faf806f207681b4108174d2795869e0a5a9baef9dd8799e6a929111b8f\" returns successfully" Jan 29 12:02:58.643784 containerd[1571]: time="2025-01-29T12:02:58.643715949Z" level=info msg="shim disconnected" id=86cc03faf806f207681b4108174d2795869e0a5a9baef9dd8799e6a929111b8f namespace=k8s.io Jan 29 12:02:58.643784 containerd[1571]: time="2025-01-29T12:02:58.643782444Z" level=warning msg="cleaning up after shim disconnected" id=86cc03faf806f207681b4108174d2795869e0a5a9baef9dd8799e6a929111b8f namespace=k8s.io Jan 29 12:02:58.643784 containerd[1571]: time="2025-01-29T12:02:58.643793074Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:59.543583 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86cc03faf806f207681b4108174d2795869e0a5a9baef9dd8799e6a929111b8f-rootfs.mount: Deactivated successfully. Jan 29 12:02:59.550592 kubelet[2730]: E0129 12:02:59.550562 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:02:59.552965 containerd[1571]: time="2025-01-29T12:02:59.552897566Z" level=info msg="CreateContainer within sandbox \"86a7c85dcaba4021847d9da943330bf019f67eba3b65de1bda8d942fd3ebdca3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 12:02:59.647665 containerd[1571]: time="2025-01-29T12:02:59.647612359Z" level=info msg="CreateContainer within sandbox \"86a7c85dcaba4021847d9da943330bf019f67eba3b65de1bda8d942fd3ebdca3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b\"" Jan 29 12:02:59.648064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2290192968.mount: Deactivated successfully. 
Jan 29 12:02:59.648380 containerd[1571]: time="2025-01-29T12:02:59.648232067Z" level=info msg="StartContainer for \"b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b\"" Jan 29 12:02:59.707302 containerd[1571]: time="2025-01-29T12:02:59.707259104Z" level=info msg="StartContainer for \"b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b\" returns successfully" Jan 29 12:02:59.804750 kubelet[2730]: I0129 12:02:59.804320 2730 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 12:02:59.884180 kubelet[2730]: I0129 12:02:59.884134 2730 topology_manager.go:215] "Topology Admit Handler" podUID="bbb85fa5-0035-4973-ae08-ccdc81106dd2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qqhxq" Jan 29 12:02:59.932329 kubelet[2730]: I0129 12:02:59.932274 2730 topology_manager.go:215] "Topology Admit Handler" podUID="de8c0d21-a94c-40eb-b5db-96ba82f66304" podNamespace="kube-system" podName="coredns-7db6d8ff4d-v64rz" Jan 29 12:03:00.032245 kubelet[2730]: I0129 12:03:00.032198 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbb85fa5-0035-4973-ae08-ccdc81106dd2-config-volume\") pod \"coredns-7db6d8ff4d-qqhxq\" (UID: \"bbb85fa5-0035-4973-ae08-ccdc81106dd2\") " pod="kube-system/coredns-7db6d8ff4d-qqhxq" Jan 29 12:03:00.032245 kubelet[2730]: I0129 12:03:00.032291 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r92n\" (UniqueName: \"kubernetes.io/projected/bbb85fa5-0035-4973-ae08-ccdc81106dd2-kube-api-access-4r92n\") pod \"coredns-7db6d8ff4d-qqhxq\" (UID: \"bbb85fa5-0035-4973-ae08-ccdc81106dd2\") " pod="kube-system/coredns-7db6d8ff4d-qqhxq" Jan 29 12:03:00.133162 kubelet[2730]: I0129 12:03:00.132784 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de8c0d21-a94c-40eb-b5db-96ba82f66304-config-volume\") pod \"coredns-7db6d8ff4d-v64rz\" (UID: \"de8c0d21-a94c-40eb-b5db-96ba82f66304\") " pod="kube-system/coredns-7db6d8ff4d-v64rz" Jan 29 12:03:00.133162 kubelet[2730]: I0129 12:03:00.132851 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpvmr\" (UniqueName: \"kubernetes.io/projected/de8c0d21-a94c-40eb-b5db-96ba82f66304-kube-api-access-gpvmr\") pod \"coredns-7db6d8ff4d-v64rz\" (UID: \"de8c0d21-a94c-40eb-b5db-96ba82f66304\") " pod="kube-system/coredns-7db6d8ff4d-v64rz" Jan 29 12:03:00.188673 kubelet[2730]: E0129 12:03:00.188634 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:00.189367 containerd[1571]: time="2025-01-29T12:03:00.189314479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qqhxq,Uid:bbb85fa5-0035-4973-ae08-ccdc81106dd2,Namespace:kube-system,Attempt:0,}" Jan 29 12:03:00.540356 kubelet[2730]: E0129 12:03:00.540314 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:00.541277 containerd[1571]: time="2025-01-29T12:03:00.541208330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v64rz,Uid:de8c0d21-a94c-40eb-b5db-96ba82f66304,Namespace:kube-system,Attempt:0,}" 
Jan 29 12:03:00.555457 kubelet[2730]: E0129 12:03:00.555423 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:01.166263 systemd[1]: Started sshd@7-10.0.0.134:22-10.0.0.1:37730.service - OpenSSH per-connection server daemon (10.0.0.1:37730). Jan 29 12:03:01.193698 sshd[3514]: Accepted publickey for core from 10.0.0.1 port 37730 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:01.195300 sshd[3514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:01.199902 systemd-logind[1547]: New session 8 of user core. Jan 29 12:03:01.207301 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 12:03:01.332475 sshd[3514]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:01.337152 systemd[1]: sshd@7-10.0.0.134:22-10.0.0.1:37730.service: Deactivated successfully. Jan 29 12:03:01.339326 systemd-logind[1547]: Session 8 logged out. Waiting for processes to exit. Jan 29 12:03:01.339415 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 12:03:01.340971 systemd-logind[1547]: Removed session 8. Jan 29 12:03:01.556986 kubelet[2730]: E0129 12:03:01.556938 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:02.558447 kubelet[2730]: E0129 12:03:02.558399 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:03.474850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2659433889.mount: Deactivated successfully. 
Jan 29 12:03:03.837838 containerd[1571]: time="2025-01-29T12:03:03.837725409Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:03.838489 containerd[1571]: time="2025-01-29T12:03:03.838454702Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 12:03:03.839624 containerd[1571]: time="2025-01-29T12:03:03.839580339Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:03.840947 containerd[1571]: time="2025-01-29T12:03:03.840919689Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 8.324185422s" Jan 29 12:03:03.840987 containerd[1571]: time="2025-01-29T12:03:03.840950026Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 12:03:03.842638 containerd[1571]: time="2025-01-29T12:03:03.842607345Z" level=info msg="CreateContainer within sandbox \"87cf589d86a394eab94f2f5c1db5f5a06b706ef382542fe18e3d2ad1e6f7529f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 12:03:03.935997 containerd[1571]: time="2025-01-29T12:03:03.935950536Z" level=info msg="CreateContainer within sandbox \"87cf589d86a394eab94f2f5c1db5f5a06b706ef382542fe18e3d2ad1e6f7529f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944\"" Jan 29 12:03:03.936503 containerd[1571]: time="2025-01-29T12:03:03.936472658Z" level=info msg="StartContainer for \"884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944\"" Jan 29 12:03:04.000025 containerd[1571]: time="2025-01-29T12:03:03.999986032Z" level=info msg="StartContainer for \"884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944\" returns successfully" Jan 29 12:03:04.563397 kubelet[2730]: E0129 12:03:04.563122 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:04.643550 kubelet[2730]: I0129 12:03:04.643488 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lv2zq" podStartSLOduration=10.717962527 podStartE2EDuration="16.643467715s" podCreationTimestamp="2025-01-29 12:02:48 +0000 UTC" firstStartedPulling="2025-01-29 12:02:49.590499703 +0000 UTC m=+15.196732333" lastFinishedPulling="2025-01-29 12:02:55.516004891 +0000 UTC m=+21.122237521" observedRunningTime="2025-01-29 12:03:00.747843355 +0000 UTC m=+26.354076015" watchObservedRunningTime="2025-01-29 12:03:04.643467715 +0000 UTC m=+30.249700335" Jan 29 12:03:04.643725 kubelet[2730]: I0129 12:03:04.643643 2730 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/cilium-operator-599987898-pl5mn" podStartSLOduration=1.694334221 podStartE2EDuration="15.643638667s" podCreationTimestamp="2025-01-29 12:02:49 +0000 UTC" firstStartedPulling="2025-01-29 12:02:49.892199914 +0000 UTC m=+15.498432544" lastFinishedPulling="2025-01-29 12:03:03.84150437 +0000 UTC m=+29.447736990" observedRunningTime="2025-01-29 12:03:04.643319867 +0000 UTC m=+30.249552497" watchObservedRunningTime="2025-01-29 12:03:04.643638667 +0000 UTC m=+30.249871287" Jan 29 12:03:05.564696 kubelet[2730]: E0129 12:03:05.564650 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:06.345270 systemd[1]: Started sshd@8-10.0.0.134:22-10.0.0.1:37736.service - OpenSSH per-connection server daemon (10.0.0.1:37736). Jan 29 12:03:06.377910 sshd[3588]: Accepted publickey for core from 10.0.0.1 port 37736 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:06.379447 sshd[3588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:06.384097 systemd-logind[1547]: New session 9 of user core. Jan 29 12:03:06.406136 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 12:03:06.525169 sshd[3588]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:06.529857 systemd[1]: sshd@8-10.0.0.134:22-10.0.0.1:37736.service: Deactivated successfully. Jan 29 12:03:06.532353 systemd-logind[1547]: Session 9 logged out. Waiting for processes to exit. Jan 29 12:03:06.532410 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 12:03:06.533598 systemd-logind[1547]: Removed session 9. Jan 29 12:03:07.960015 systemd-networkd[1243]: cilium_host: Link UP Jan 29 12:03:07.960678 systemd-networkd[1243]: cilium_net: Link UP Jan 29 12:03:07.961219 systemd-networkd[1243]: cilium_net: Gained carrier Jan 29 12:03:07.961952 systemd-networkd[1243]: cilium_host: Gained carrier Jan 29 12:03:08.063961 systemd-networkd[1243]: cilium_vxlan: Link UP Jan 29 12:03:08.064640 systemd-networkd[1243]: cilium_vxlan: Gained carrier Jan 29 12:03:08.259954 systemd-networkd[1243]: cilium_net: Gained IPv6LL Jan 29 12:03:08.270845 kernel: NET: Registered PF_ALG protocol family Jan 29 12:03:08.788234 systemd-networkd[1243]: cilium_host: Gained IPv6LL Jan 29 12:03:08.895354 systemd-networkd[1243]: lxc_health: Link UP Jan 29 12:03:08.905273 systemd-networkd[1243]: lxc_health: Gained carrier Jan 29 12:03:09.161515 systemd-networkd[1243]: lxcbdd37a0ae861: Link UP Jan 29 12:03:09.170932 kernel: eth0: renamed from tmp87652 Jan 29 12:03:09.175691 systemd-networkd[1243]: cilium_vxlan: Gained IPv6LL Jan 29 12:03:09.176145 systemd-networkd[1243]: lxcbdd37a0ae861: Gained carrier Jan 29 12:03:09.260844 kernel: eth0: renamed from tmp5d640 Jan 29 12:03:09.263183 systemd-networkd[1243]: lxc000165cfad75: Link UP Jan 29 12:03:09.266991 systemd-networkd[1243]: lxc000165cfad75: Gained carrier Jan 29 12:03:09.520576 kubelet[2730]: E0129 12:03:09.520539 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:09.571826 kubelet[2730]: E0129 12:03:09.571788 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:10.452056 systemd-networkd[1243]: 
lxcbdd37a0ae861: Gained IPv6LL Jan 29 12:03:10.573321 kubelet[2730]: E0129 12:03:10.573289 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:10.899933 systemd-networkd[1243]: lxc_health: Gained IPv6LL Jan 29 12:03:11.091989 systemd-networkd[1243]: lxc000165cfad75: Gained IPv6LL Jan 29 12:03:11.541142 systemd[1]: Started sshd@9-10.0.0.134:22-10.0.0.1:40868.service - OpenSSH per-connection server daemon (10.0.0.1:40868). Jan 29 12:03:11.571260 sshd[3986]: Accepted publickey for core from 10.0.0.1 port 40868 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:11.572863 sshd[3986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:11.576896 systemd-logind[1547]: New session 10 of user core. Jan 29 12:03:11.583119 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 12:03:11.700322 sshd[3986]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:11.703978 systemd[1]: sshd@9-10.0.0.134:22-10.0.0.1:40868.service: Deactivated successfully. Jan 29 12:03:11.706242 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 12:03:11.706271 systemd-logind[1547]: Session 10 logged out. Waiting for processes to exit. Jan 29 12:03:11.707772 systemd-logind[1547]: Removed session 10. Jan 29 12:03:12.729356 containerd[1571]: time="2025-01-29T12:03:12.728587357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:12.730154 containerd[1571]: time="2025-01-29T12:03:12.729338409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:12.730154 containerd[1571]: time="2025-01-29T12:03:12.729363576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:12.730154 containerd[1571]: time="2025-01-29T12:03:12.729611021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:12.733836 containerd[1571]: time="2025-01-29T12:03:12.730677705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:12.733836 containerd[1571]: time="2025-01-29T12:03:12.730772373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:12.733836 containerd[1571]: time="2025-01-29T12:03:12.730793343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:12.733836 containerd[1571]: time="2025-01-29T12:03:12.730959264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:12.759491 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:03:12.760005 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:03:12.788302 containerd[1571]: time="2025-01-29T12:03:12.787743463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qqhxq,Uid:bbb85fa5-0035-4973-ae08-ccdc81106dd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d64049a3ef6eaac538394aca4e07295a7be2de151d544dcce0325c79027bbe1\"" Jan 29 12:03:12.788520 kubelet[2730]: E0129 12:03:12.788498 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:12.789230 containerd[1571]: time="2025-01-29T12:03:12.789198125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v64rz,Uid:de8c0d21-a94c-40eb-b5db-96ba82f66304,Namespace:kube-system,Attempt:0,} returns sandbox id \"8765204e6fa61d81ef98f9df45339c57486d6690d116c6bb617c13441bfc007d\"" Jan 29 12:03:12.791083 kubelet[2730]: E0129 12:03:12.791059 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:12.793036 containerd[1571]: time="2025-01-29T12:03:12.792997122Z" level=info msg="CreateContainer within sandbox \"8765204e6fa61d81ef98f9df45339c57486d6690d116c6bb617c13441bfc007d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:03:12.813992 containerd[1571]: time="2025-01-29T12:03:12.813943363Z" level=info msg="CreateContainer within sandbox \"5d64049a3ef6eaac538394aca4e07295a7be2de151d544dcce0325c79027bbe1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:03:13.263772 containerd[1571]: time="2025-01-29T12:03:13.263706209Z" level=info msg="CreateContainer within sandbox \"5d64049a3ef6eaac538394aca4e07295a7be2de151d544dcce0325c79027bbe1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"62fca4927a01878123c6a805b6484ad0857f3c35c5537da3e9ba294c8eff9ac8\"" Jan 29 12:03:13.264339 containerd[1571]: time="2025-01-29T12:03:13.264298892Z" level=info msg="StartContainer for \"62fca4927a01878123c6a805b6484ad0857f3c35c5537da3e9ba294c8eff9ac8\"" Jan 29 12:03:13.265010 containerd[1571]: time="2025-01-29T12:03:13.264978218Z" level=info msg="CreateContainer within sandbox \"8765204e6fa61d81ef98f9df45339c57486d6690d116c6bb617c13441bfc007d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"34462394f716360487ffd9f26fd2806689b71f1ad0a0d68152cb394b1b7c38b3\"" Jan 29 12:03:13.265534 containerd[1571]: time="2025-01-29T12:03:13.265494628Z" level=info msg="StartContainer for \"34462394f716360487ffd9f26fd2806689b71f1ad0a0d68152cb394b1b7c38b3\"" Jan 29 12:03:13.324617 containerd[1571]: time="2025-01-29T12:03:13.324549248Z" level=info msg="StartContainer for \"34462394f716360487ffd9f26fd2806689b71f1ad0a0d68152cb394b1b7c38b3\" returns successfully" Jan 29 12:03:13.324617 containerd[1571]: time="2025-01-29T12:03:13.324586468Z" level=info msg="StartContainer for \"62fca4927a01878123c6a805b6484ad0857f3c35c5537da3e9ba294c8eff9ac8\" returns successfully" Jan 29 12:03:13.581176 kubelet[2730]: E0129 12:03:13.580726 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:13.586116 kubelet[2730]: E0129 12:03:13.586077 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:13.589823 kubelet[2730]: I0129 12:03:13.589745 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-v64rz" podStartSLOduration=24.589735604 podStartE2EDuration="24.589735604s" podCreationTimestamp="2025-01-29 12:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:03:13.58940319 +0000 UTC m=+39.195635820" watchObservedRunningTime="2025-01-29 12:03:13.589735604 +0000 UTC m=+39.195968234" Jan 29 12:03:13.601034 kubelet[2730]: I0129 12:03:13.599549 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qqhxq" podStartSLOduration=24.599532277 podStartE2EDuration="24.599532277s" podCreationTimestamp="2025-01-29 12:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:03:13.599525514 +0000 UTC m=+39.205758165" watchObservedRunningTime="2025-01-29 12:03:13.599532277 +0000 UTC m=+39.205764907" Jan 29 12:03:14.585272 kubelet[2730]: E0129 12:03:14.585231 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:14.585750 kubelet[2730]: E0129 12:03:14.585514 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:15.586722 kubelet[2730]: E0129 12:03:15.586670 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:16.710058 systemd[1]: Started sshd@10-10.0.0.134:22-10.0.0.1:40882.service - OpenSSH per-connection server daemon (10.0.0.1:40882). Jan 29 12:03:16.739579 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 40882 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:16.741198 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:16.745094 systemd-logind[1547]: New session 11 of user core. Jan 29 12:03:16.754094 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 12:03:16.872090 sshd[4173]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:16.876960 systemd[1]: sshd@10-10.0.0.134:22-10.0.0.1:40882.service: Deactivated successfully. Jan 29 12:03:16.879768 systemd-logind[1547]: Session 11 logged out. Waiting for processes to exit. Jan 29 12:03:16.879876 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 12:03:16.881289 systemd-logind[1547]: Removed session 11. Jan 29 12:03:21.890128 systemd[1]: Started sshd@11-10.0.0.134:22-10.0.0.1:47206.service - OpenSSH per-connection server daemon (10.0.0.1:47206). 
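The pod_startup_latency_tracker entries report two figures: podStartE2EDuration is the wall-clock time from pod creation to the first observed running state, and podStartSLOduration is that interval minus the time spent pulling images (lastFinishedPulling − firstStartedPulling). For the coredns pods just above nothing was pulled (the pull timestamps are the zero time), so the two values essentially coincide; for the cilium-lv2zq entry logged earlier the subtraction works out exactly: 16.643467715 s minus 5.925505188 s of pulling gives the reported 10.717962527 s. A small check of that arithmetic, with the timestamps copied from the log and their monotonic-clock suffixes dropped:

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	// Timestamps from the cilium-lv2zq startup-latency record.
	firstStartedPulling, err := time.Parse(layout, "2025-01-29 12:02:49.590499703 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	lastFinishedPulling, err := time.Parse(layout, "2025-01-29 12:02:55.516004891 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}

	e2e := 16.643467715 // podStartE2EDuration in seconds, from the log
	pull := lastFinishedPulling.Sub(firstStartedPulling).Seconds()
	fmt.Printf("image pull took %.9fs; SLO duration = %.9fs\n", pull, e2e-pull)
	// Prints ~5.925505188s and ~10.717962527s, matching podStartSLOduration.
}
```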
Jan 29 12:03:21.916275 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 47206 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:21.917914 sshd[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:21.922000 systemd-logind[1547]: New session 12 of user core. Jan 29 12:03:21.935106 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 12:03:22.048012 sshd[4191]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:22.054020 systemd[1]: Started sshd@12-10.0.0.134:22-10.0.0.1:47214.service - OpenSSH per-connection server daemon (10.0.0.1:47214). Jan 29 12:03:22.054476 systemd[1]: sshd@11-10.0.0.134:22-10.0.0.1:47206.service: Deactivated successfully. Jan 29 12:03:22.057922 systemd-logind[1547]: Session 12 logged out. Waiting for processes to exit. Jan 29 12:03:22.058554 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 12:03:22.059525 systemd-logind[1547]: Removed session 12. Jan 29 12:03:22.081603 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 47214 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:22.083194 sshd[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:22.087524 systemd-logind[1547]: New session 13 of user core. Jan 29 12:03:22.097161 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 12:03:22.244959 sshd[4205]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:22.257114 systemd[1]: Started sshd@13-10.0.0.134:22-10.0.0.1:47228.service - OpenSSH per-connection server daemon (10.0.0.1:47228). Jan 29 12:03:22.257712 systemd[1]: sshd@12-10.0.0.134:22-10.0.0.1:47214.service: Deactivated successfully. Jan 29 12:03:22.259884 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 12:03:22.261911 systemd-logind[1547]: Session 13 logged out. Waiting for processes to exit. Jan 29 12:03:22.263300 systemd-logind[1547]: Removed session 13. Jan 29 12:03:22.287033 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 47228 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:22.288831 sshd[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:22.293540 systemd-logind[1547]: New session 14 of user core. Jan 29 12:03:22.304389 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 12:03:22.416649 sshd[4218]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:22.420958 systemd[1]: sshd@13-10.0.0.134:22-10.0.0.1:47228.service: Deactivated successfully. Jan 29 12:03:22.423435 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 12:03:22.424264 systemd-logind[1547]: Session 14 logged out. Waiting for processes to exit. Jan 29 12:03:22.425333 systemd-logind[1547]: Removed session 14. Jan 29 12:03:27.427027 systemd[1]: Started sshd@14-10.0.0.134:22-10.0.0.1:47242.service - OpenSSH per-connection server daemon (10.0.0.1:47242). Jan 29 12:03:27.453339 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 47242 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:27.455119 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:27.459158 systemd-logind[1547]: New session 15 of user core. Jan 29 12:03:27.469057 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 29 12:03:27.576494 sshd[4236]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:27.580033 systemd[1]: sshd@14-10.0.0.134:22-10.0.0.1:47242.service: Deactivated successfully. Jan 29 12:03:27.582283 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 12:03:27.583083 systemd-logind[1547]: Session 15 logged out. Waiting for processes to exit. Jan 29 12:03:27.583908 systemd-logind[1547]: Removed session 15. Jan 29 12:03:32.587037 systemd[1]: Started sshd@15-10.0.0.134:22-10.0.0.1:47344.service - OpenSSH per-connection server daemon (10.0.0.1:47344). Jan 29 12:03:32.612962 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 47344 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:32.614493 sshd[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:32.618591 systemd-logind[1547]: New session 16 of user core. Jan 29 12:03:32.627093 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 12:03:32.731930 sshd[4251]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:32.742025 systemd[1]: Started sshd@16-10.0.0.134:22-10.0.0.1:47352.service - OpenSSH per-connection server daemon (10.0.0.1:47352). Jan 29 12:03:32.742507 systemd[1]: sshd@15-10.0.0.134:22-10.0.0.1:47344.service: Deactivated successfully. Jan 29 12:03:32.745561 systemd-logind[1547]: Session 16 logged out. Waiting for processes to exit. Jan 29 12:03:32.746282 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 12:03:32.747349 systemd-logind[1547]: Removed session 16. Jan 29 12:03:32.772218 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 47352 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:32.774237 sshd[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:32.778756 systemd-logind[1547]: New session 17 of user core. Jan 29 12:03:32.789168 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 12:03:32.980057 sshd[4263]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:32.992190 systemd[1]: Started sshd@17-10.0.0.134:22-10.0.0.1:47358.service - OpenSSH per-connection server daemon (10.0.0.1:47358). Jan 29 12:03:32.992854 systemd[1]: sshd@16-10.0.0.134:22-10.0.0.1:47352.service: Deactivated successfully. Jan 29 12:03:32.996182 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 12:03:32.996885 systemd-logind[1547]: Session 17 logged out. Waiting for processes to exit. Jan 29 12:03:32.998004 systemd-logind[1547]: Removed session 17. Jan 29 12:03:33.025918 sshd[4276]: Accepted publickey for core from 10.0.0.1 port 47358 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:33.027424 sshd[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:33.031784 systemd-logind[1547]: New session 18 of user core. Jan 29 12:03:33.041183 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 12:03:34.371383 sshd[4276]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:34.380292 systemd[1]: Started sshd@18-10.0.0.134:22-10.0.0.1:47366.service - OpenSSH per-connection server daemon (10.0.0.1:47366). Jan 29 12:03:34.384119 systemd[1]: sshd@17-10.0.0.134:22-10.0.0.1:47358.service: Deactivated successfully. Jan 29 12:03:34.386915 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 12:03:34.390152 systemd-logind[1547]: Session 18 logged out. Waiting for processes to exit. 
Jan 29 12:03:34.391959 systemd-logind[1547]: Removed session 18. Jan 29 12:03:34.413767 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 47366 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:34.415466 sshd[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:34.419917 systemd-logind[1547]: New session 19 of user core. Jan 29 12:03:34.429097 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 12:03:34.650770 sshd[4295]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:34.659116 systemd[1]: Started sshd@19-10.0.0.134:22-10.0.0.1:47380.service - OpenSSH per-connection server daemon (10.0.0.1:47380). Jan 29 12:03:34.659664 systemd[1]: sshd@18-10.0.0.134:22-10.0.0.1:47366.service: Deactivated successfully. Jan 29 12:03:34.662574 systemd-logind[1547]: Session 19 logged out. Waiting for processes to exit. Jan 29 12:03:34.663285 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 12:03:34.664331 systemd-logind[1547]: Removed session 19. Jan 29 12:03:34.689227 sshd[4312]: Accepted publickey for core from 10.0.0.1 port 47380 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:34.690356 sshd[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:34.694796 systemd-logind[1547]: New session 20 of user core. Jan 29 12:03:34.699095 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 12:03:34.807511 sshd[4312]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:34.811755 systemd[1]: sshd@19-10.0.0.134:22-10.0.0.1:47380.service: Deactivated successfully. Jan 29 12:03:34.814736 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 12:03:34.815550 systemd-logind[1547]: Session 20 logged out. Waiting for processes to exit. Jan 29 12:03:34.816373 systemd-logind[1547]: Removed session 20. Jan 29 12:03:39.818087 systemd[1]: Started sshd@20-10.0.0.134:22-10.0.0.1:47390.service - OpenSSH per-connection server daemon (10.0.0.1:47390). Jan 29 12:03:39.847139 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 47390 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:39.848883 sshd[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:39.852728 systemd-logind[1547]: New session 21 of user core. Jan 29 12:03:39.860057 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 12:03:39.966405 sshd[4330]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:39.970791 systemd[1]: sshd@20-10.0.0.134:22-10.0.0.1:47390.service: Deactivated successfully. Jan 29 12:03:39.973115 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 12:03:39.973918 systemd-logind[1547]: Session 21 logged out. Waiting for processes to exit. Jan 29 12:03:39.974891 systemd-logind[1547]: Removed session 21. Jan 29 12:03:44.979112 systemd[1]: Started sshd@21-10.0.0.134:22-10.0.0.1:41390.service - OpenSSH per-connection server daemon (10.0.0.1:41390). Jan 29 12:03:45.005015 sshd[4348]: Accepted publickey for core from 10.0.0.1 port 41390 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:45.006580 sshd[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:45.010360 systemd-logind[1547]: New session 22 of user core. Jan 29 12:03:45.020116 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 29 12:03:45.129351 sshd[4348]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:45.133240 systemd[1]: sshd@21-10.0.0.134:22-10.0.0.1:41390.service: Deactivated successfully. Jan 29 12:03:45.135703 systemd-logind[1547]: Session 22 logged out. Waiting for processes to exit. Jan 29 12:03:45.135801 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 12:03:45.137004 systemd-logind[1547]: Removed session 22. Jan 29 12:03:50.141106 systemd[1]: Started sshd@22-10.0.0.134:22-10.0.0.1:41406.service - OpenSSH per-connection server daemon (10.0.0.1:41406). Jan 29 12:03:50.166647 sshd[4365]: Accepted publickey for core from 10.0.0.1 port 41406 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:50.168089 sshd[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:50.171890 systemd-logind[1547]: New session 23 of user core. Jan 29 12:03:50.182064 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 12:03:50.282630 sshd[4365]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:50.286554 systemd[1]: sshd@22-10.0.0.134:22-10.0.0.1:41406.service: Deactivated successfully. Jan 29 12:03:50.289108 systemd-logind[1547]: Session 23 logged out. Waiting for processes to exit. Jan 29 12:03:50.289208 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 12:03:50.290308 systemd-logind[1547]: Removed session 23. Jan 29 12:03:55.293031 systemd[1]: Started sshd@23-10.0.0.134:22-10.0.0.1:47780.service - OpenSSH per-connection server daemon (10.0.0.1:47780). Jan 29 12:03:55.322803 sshd[4380]: Accepted publickey for core from 10.0.0.1 port 47780 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:55.324438 sshd[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:55.328024 systemd-logind[1547]: New session 24 of user core. Jan 29 12:03:55.334034 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 12:03:55.433133 sshd[4380]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:55.442109 systemd[1]: Started sshd@24-10.0.0.134:22-10.0.0.1:47792.service - OpenSSH per-connection server daemon (10.0.0.1:47792). Jan 29 12:03:55.442616 systemd[1]: sshd@23-10.0.0.134:22-10.0.0.1:47780.service: Deactivated successfully. Jan 29 12:03:55.445872 systemd-logind[1547]: Session 24 logged out. Waiting for processes to exit. Jan 29 12:03:55.446507 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 12:03:55.447518 systemd-logind[1547]: Removed session 24. Jan 29 12:03:55.473518 sshd[4393]: Accepted publickey for core from 10.0.0.1 port 47792 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:55.475313 sshd[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:55.479823 systemd-logind[1547]: New session 25 of user core. Jan 29 12:03:55.491158 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 29 12:03:56.824606 containerd[1571]: time="2025-01-29T12:03:56.824542932Z" level=info msg="StopContainer for \"884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944\" with timeout 30 (s)" Jan 29 12:03:56.825313 containerd[1571]: time="2025-01-29T12:03:56.825267274Z" level=info msg="Stop container \"884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944\" with signal terminated" Jan 29 12:03:56.873994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944-rootfs.mount: Deactivated successfully. Jan 29 12:03:56.878637 containerd[1571]: time="2025-01-29T12:03:56.878593320Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:03:56.880481 containerd[1571]: time="2025-01-29T12:03:56.880449010Z" level=info msg="StopContainer for \"b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b\" with timeout 2 (s)" Jan 29 12:03:56.880688 containerd[1571]: time="2025-01-29T12:03:56.880657938Z" level=info msg="Stop container \"b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b\" with signal terminated" Jan 29 12:03:56.882055 containerd[1571]: time="2025-01-29T12:03:56.881994187Z" level=info msg="shim disconnected" id=884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944 namespace=k8s.io Jan 29 12:03:56.882055 containerd[1571]: time="2025-01-29T12:03:56.882039723Z" level=warning msg="cleaning up after shim disconnected" id=884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944 namespace=k8s.io Jan 29 12:03:56.882055 containerd[1571]: time="2025-01-29T12:03:56.882048400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:03:56.887136 systemd-networkd[1243]: lxc_health: Link DOWN Jan 29 12:03:56.887148 systemd-networkd[1243]: lxc_health: Lost carrier Jan 29 12:03:56.906985 containerd[1571]: time="2025-01-29T12:03:56.906775661Z" level=info msg="StopContainer for \"884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944\" returns successfully" Jan 29 12:03:56.911191 containerd[1571]: time="2025-01-29T12:03:56.910953799Z" level=info msg="StopPodSandbox for \"87cf589d86a394eab94f2f5c1db5f5a06b706ef382542fe18e3d2ad1e6f7529f\"" Jan 29 12:03:56.911191 containerd[1571]: time="2025-01-29T12:03:56.911045164Z" level=info msg="Container to stop \"884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:03:56.915054 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87cf589d86a394eab94f2f5c1db5f5a06b706ef382542fe18e3d2ad1e6f7529f-shm.mount: Deactivated successfully. Jan 29 12:03:56.928354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b-rootfs.mount: Deactivated successfully. 
Jan 29 12:03:56.935904 containerd[1571]: time="2025-01-29T12:03:56.935831878Z" level=info msg="shim disconnected" id=b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b namespace=k8s.io Jan 29 12:03:56.935904 containerd[1571]: time="2025-01-29T12:03:56.935891783Z" level=warning msg="cleaning up after shim disconnected" id=b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b namespace=k8s.io Jan 29 12:03:56.935904 containerd[1571]: time="2025-01-29T12:03:56.935901150Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:03:56.941769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87cf589d86a394eab94f2f5c1db5f5a06b706ef382542fe18e3d2ad1e6f7529f-rootfs.mount: Deactivated successfully. Jan 29 12:03:56.945988 containerd[1571]: time="2025-01-29T12:03:56.945922904Z" level=info msg="shim disconnected" id=87cf589d86a394eab94f2f5c1db5f5a06b706ef382542fe18e3d2ad1e6f7529f namespace=k8s.io Jan 29 12:03:56.945988 containerd[1571]: time="2025-01-29T12:03:56.945984462Z" level=warning msg="cleaning up after shim disconnected" id=87cf589d86a394eab94f2f5c1db5f5a06b706ef382542fe18e3d2ad1e6f7529f namespace=k8s.io Jan 29 12:03:56.946188 containerd[1571]: time="2025-01-29T12:03:56.945993268Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:03:56.953635 containerd[1571]: time="2025-01-29T12:03:56.953528871Z" level=info msg="StopContainer for \"b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b\" returns successfully" Jan 29 12:03:56.954222 containerd[1571]: time="2025-01-29T12:03:56.954036549Z" level=info msg="StopPodSandbox for \"86a7c85dcaba4021847d9da943330bf019f67eba3b65de1bda8d942fd3ebdca3\"" Jan 29 12:03:56.954222 containerd[1571]: time="2025-01-29T12:03:56.954074421Z" level=info msg="Container to stop \"9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:03:56.954222 containerd[1571]: time="2025-01-29T12:03:56.954086294Z" level=info msg="Container to stop \"f09ff8a6e585caeb6c6aa8ac0ee170317ff65c3b6cb49d1a0eb00d3aefebb6e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:03:56.954222 containerd[1571]: time="2025-01-29T12:03:56.954095131Z" level=info msg="Container to stop \"86cc03faf806f207681b4108174d2795869e0a5a9baef9dd8799e6a929111b8f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:03:56.954222 containerd[1571]: time="2025-01-29T12:03:56.954104268Z" level=info msg="Container to stop \"b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:03:56.954222 containerd[1571]: time="2025-01-29T12:03:56.954113336Z" level=info msg="Container to stop \"c8cc926574caf0d88f1cd02f7f59ac70b73438ae77b7e1ae98cfbd92e7f596d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:03:56.959904 containerd[1571]: time="2025-01-29T12:03:56.959859084Z" level=info msg="TearDown network for sandbox \"87cf589d86a394eab94f2f5c1db5f5a06b706ef382542fe18e3d2ad1e6f7529f\" successfully" Jan 29 12:03:56.959904 containerd[1571]: time="2025-01-29T12:03:56.959889472Z" level=info msg="StopPodSandbox for \"87cf589d86a394eab94f2f5c1db5f5a06b706ef382542fe18e3d2ad1e6f7529f\" returns successfully" Jan 29 12:03:56.981352 containerd[1571]: time="2025-01-29T12:03:56.981265341Z" level=info msg="shim disconnected" 
id=86a7c85dcaba4021847d9da943330bf019f67eba3b65de1bda8d942fd3ebdca3 namespace=k8s.io Jan 29 12:03:56.981352 containerd[1571]: time="2025-01-29T12:03:56.981327830Z" level=warning msg="cleaning up after shim disconnected" id=86a7c85dcaba4021847d9da943330bf019f67eba3b65de1bda8d942fd3ebdca3 namespace=k8s.io Jan 29 12:03:56.981352 containerd[1571]: time="2025-01-29T12:03:56.981337378Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:03:56.995382 containerd[1571]: time="2025-01-29T12:03:56.995328623Z" level=info msg="TearDown network for sandbox \"86a7c85dcaba4021847d9da943330bf019f67eba3b65de1bda8d942fd3ebdca3\" successfully" Jan 29 12:03:56.995382 containerd[1571]: time="2025-01-29T12:03:56.995367778Z" level=info msg="StopPodSandbox for \"86a7c85dcaba4021847d9da943330bf019f67eba3b65de1bda8d942fd3ebdca3\" returns successfully" Jan 29 12:03:57.119487 kubelet[2730]: I0129 12:03:57.118623 2730 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bedbb8f7-003d-4c64-a530-79785223949b-cilium-config-path\") pod \"bedbb8f7-003d-4c64-a530-79785223949b\" (UID: \"bedbb8f7-003d-4c64-a530-79785223949b\") " Jan 29 12:03:57.119487 kubelet[2730]: I0129 12:03:57.118674 2730 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-host-proc-sys-kernel\") pod \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " Jan 29 12:03:57.119487 kubelet[2730]: I0129 12:03:57.118692 2730 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-etc-cni-netd\") pod \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " Jan 29 12:03:57.119487 kubelet[2730]: I0129 12:03:57.118710 2730 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-clustermesh-secrets\") pod \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " Jan 29 12:03:57.119487 kubelet[2730]: I0129 12:03:57.118725 2730 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-cilium-config-path\") pod \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " Jan 29 12:03:57.119487 kubelet[2730]: I0129 12:03:57.118753 2730 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7vhv\" (UniqueName: \"kubernetes.io/projected/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-kube-api-access-v7vhv\") pod \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " Jan 29 12:03:57.120030 kubelet[2730]: I0129 12:03:57.118769 2730 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-cilium-cgroup\") pod \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " Jan 29 12:03:57.120030 kubelet[2730]: I0129 12:03:57.118782 2730 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-cilium-run\") pod \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " Jan 29 12:03:57.120030 kubelet[2730]: I0129 12:03:57.118795 2730 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-host-proc-sys-net\") pod \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " Jan 29 12:03:57.120030 kubelet[2730]: I0129 12:03:57.118838 2730 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-lib-modules\") pod \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " Jan 29 12:03:57.120030 kubelet[2730]: I0129 12:03:57.118789 2730 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" (UID: "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.120030 kubelet[2730]: I0129 12:03:57.118854 2730 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-xtables-lock\") pod \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " Jan 29 12:03:57.120176 kubelet[2730]: I0129 12:03:57.118882 2730 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk8k9\" (UniqueName: \"kubernetes.io/projected/bedbb8f7-003d-4c64-a530-79785223949b-kube-api-access-zk8k9\") pod \"bedbb8f7-003d-4c64-a530-79785223949b\" (UID: \"bedbb8f7-003d-4c64-a530-79785223949b\") " Jan 29 12:03:57.120176 kubelet[2730]: I0129 12:03:57.118892 2730 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" (UID: "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.120176 kubelet[2730]: I0129 12:03:57.118907 2730 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-cni-path\") pod \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " Jan 29 12:03:57.120176 kubelet[2730]: I0129 12:03:57.118914 2730 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" (UID: "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.120176 kubelet[2730]: I0129 12:03:57.118924 2730 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-hostproc\") pod \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " Jan 29 12:03:57.120176 kubelet[2730]: I0129 12:03:57.118941 2730 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-bpf-maps\") pod \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " Jan 29 12:03:57.120308 kubelet[2730]: I0129 12:03:57.118957 2730 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-hubble-tls\") pod \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\" (UID: \"f1dc81e4-d50e-4420-8c0b-2ddaacff48ff\") " Jan 29 12:03:57.120308 kubelet[2730]: I0129 12:03:57.118985 2730 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 29 12:03:57.120308 kubelet[2730]: I0129 12:03:57.118995 2730 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 29 12:03:57.120308 kubelet[2730]: I0129 12:03:57.119005 2730 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 29 12:03:57.120308 kubelet[2730]: I0129 12:03:57.119344 2730 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" (UID: "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.120308 kubelet[2730]: I0129 12:03:57.119374 2730 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" (UID: "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.120448 kubelet[2730]: I0129 12:03:57.119388 2730 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" (UID: "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.120448 kubelet[2730]: I0129 12:03:57.119400 2730 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" (UID: "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.122351 kubelet[2730]: I0129 12:03:57.122020 2730 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" (UID: "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:03:57.122787 kubelet[2730]: I0129 12:03:57.122454 2730 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bedbb8f7-003d-4c64-a530-79785223949b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bedbb8f7-003d-4c64-a530-79785223949b" (UID: "bedbb8f7-003d-4c64-a530-79785223949b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:03:57.122787 kubelet[2730]: I0129 12:03:57.122503 2730 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-hostproc" (OuterVolumeSpecName: "hostproc") pod "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" (UID: "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.122787 kubelet[2730]: I0129 12:03:57.122522 2730 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-cni-path" (OuterVolumeSpecName: "cni-path") pod "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" (UID: "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.122787 kubelet[2730]: I0129 12:03:57.122539 2730 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" (UID: "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.122787 kubelet[2730]: I0129 12:03:57.122736 2730 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" (UID: "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:03:57.123221 kubelet[2730]: I0129 12:03:57.123185 2730 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" (UID: "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:03:57.124779 kubelet[2730]: I0129 12:03:57.124724 2730 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-kube-api-access-v7vhv" (OuterVolumeSpecName: "kube-api-access-v7vhv") pod "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" (UID: "f1dc81e4-d50e-4420-8c0b-2ddaacff48ff"). InnerVolumeSpecName "kube-api-access-v7vhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:03:57.125484 kubelet[2730]: I0129 12:03:57.125459 2730 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bedbb8f7-003d-4c64-a530-79785223949b-kube-api-access-zk8k9" (OuterVolumeSpecName: "kube-api-access-zk8k9") pod "bedbb8f7-003d-4c64-a530-79785223949b" (UID: "bedbb8f7-003d-4c64-a530-79785223949b"). InnerVolumeSpecName "kube-api-access-zk8k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:03:57.219766 kubelet[2730]: I0129 12:03:57.219701 2730 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bedbb8f7-003d-4c64-a530-79785223949b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 12:03:57.219766 kubelet[2730]: I0129 12:03:57.219755 2730 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 29 12:03:57.219766 kubelet[2730]: I0129 12:03:57.219769 2730 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 12:03:57.219766 kubelet[2730]: I0129 12:03:57.219778 2730 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-v7vhv\" (UniqueName: \"kubernetes.io/projected/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-kube-api-access-v7vhv\") on node \"localhost\" DevicePath \"\"" Jan 29 12:03:57.220012 kubelet[2730]: I0129 12:03:57.219790 2730 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 29 12:03:57.220012 kubelet[2730]: I0129 12:03:57.219800 2730 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 29 12:03:57.220012 kubelet[2730]: I0129 12:03:57.219836 2730 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 29 12:03:57.220012 kubelet[2730]: I0129 12:03:57.219847 2730 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 29 12:03:57.220012 kubelet[2730]: I0129 12:03:57.219858 2730 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zk8k9\" (UniqueName: \"kubernetes.io/projected/bedbb8f7-003d-4c64-a530-79785223949b-kube-api-access-zk8k9\") on node \"localhost\" DevicePath \"\"" Jan 29 12:03:57.220012 kubelet[2730]: I0129 12:03:57.219868 2730 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 29 12:03:57.220012 kubelet[2730]: I0129 12:03:57.219877 2730 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 29 12:03:57.220012 kubelet[2730]: 
I0129 12:03:57.219886 2730 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 29 12:03:57.220187 kubelet[2730]: I0129 12:03:57.219894 2730 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 29 12:03:57.479852 kubelet[2730]: E0129 12:03:57.479797 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:57.658940 kubelet[2730]: I0129 12:03:57.658902 2730 scope.go:117] "RemoveContainer" containerID="884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944" Jan 29 12:03:57.661142 containerd[1571]: time="2025-01-29T12:03:57.661047654Z" level=info msg="RemoveContainer for \"884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944\"" Jan 29 12:03:57.692697 containerd[1571]: time="2025-01-29T12:03:57.692642297Z" level=info msg="RemoveContainer for \"884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944\" returns successfully" Jan 29 12:03:57.692983 kubelet[2730]: I0129 12:03:57.692956 2730 scope.go:117] "RemoveContainer" containerID="884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944" Jan 29 12:03:57.693304 containerd[1571]: time="2025-01-29T12:03:57.693249504Z" level=error msg="ContainerStatus for \"884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944\": not found" Jan 29 12:03:57.693464 kubelet[2730]: E0129 12:03:57.693440 2730 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944\": not found" containerID="884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944" Jan 29 12:03:57.693537 kubelet[2730]: I0129 12:03:57.693470 2730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944"} err="failed to get container status \"884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944\": rpc error: code = NotFound desc = an error occurred when try to find container \"884240f65ca6b257e9c2380b030c76bbbb6ff1d6ee9b50e693b8d3eb18c74944\": not found" Jan 29 12:03:57.693564 kubelet[2730]: I0129 12:03:57.693537 2730 scope.go:117] "RemoveContainer" containerID="b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b" Jan 29 12:03:57.694580 containerd[1571]: time="2025-01-29T12:03:57.694548811Z" level=info msg="RemoveContainer for \"b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b\"" Jan 29 12:03:57.698180 containerd[1571]: time="2025-01-29T12:03:57.698140659Z" level=info msg="RemoveContainer for \"b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b\" returns successfully" Jan 29 12:03:57.698320 kubelet[2730]: I0129 12:03:57.698295 2730 scope.go:117] "RemoveContainer" containerID="86cc03faf806f207681b4108174d2795869e0a5a9baef9dd8799e6a929111b8f" Jan 29 12:03:57.699125 containerd[1571]: time="2025-01-29T12:03:57.699102152Z" level=info msg="RemoveContainer 
for \"86cc03faf806f207681b4108174d2795869e0a5a9baef9dd8799e6a929111b8f\"" Jan 29 12:03:57.705791 containerd[1571]: time="2025-01-29T12:03:57.705766407Z" level=info msg="RemoveContainer for \"86cc03faf806f207681b4108174d2795869e0a5a9baef9dd8799e6a929111b8f\" returns successfully" Jan 29 12:03:57.705911 kubelet[2730]: I0129 12:03:57.705886 2730 scope.go:117] "RemoveContainer" containerID="c8cc926574caf0d88f1cd02f7f59ac70b73438ae77b7e1ae98cfbd92e7f596d6" Jan 29 12:03:57.706706 containerd[1571]: time="2025-01-29T12:03:57.706668266Z" level=info msg="RemoveContainer for \"c8cc926574caf0d88f1cd02f7f59ac70b73438ae77b7e1ae98cfbd92e7f596d6\"" Jan 29 12:03:57.710521 containerd[1571]: time="2025-01-29T12:03:57.710484101Z" level=info msg="RemoveContainer for \"c8cc926574caf0d88f1cd02f7f59ac70b73438ae77b7e1ae98cfbd92e7f596d6\" returns successfully" Jan 29 12:03:57.710624 kubelet[2730]: I0129 12:03:57.710610 2730 scope.go:117] "RemoveContainer" containerID="f09ff8a6e585caeb6c6aa8ac0ee170317ff65c3b6cb49d1a0eb00d3aefebb6e3" Jan 29 12:03:57.711341 containerd[1571]: time="2025-01-29T12:03:57.711320676Z" level=info msg="RemoveContainer for \"f09ff8a6e585caeb6c6aa8ac0ee170317ff65c3b6cb49d1a0eb00d3aefebb6e3\"" Jan 29 12:03:57.714781 containerd[1571]: time="2025-01-29T12:03:57.714757007Z" level=info msg="RemoveContainer for \"f09ff8a6e585caeb6c6aa8ac0ee170317ff65c3b6cb49d1a0eb00d3aefebb6e3\" returns successfully" Jan 29 12:03:57.714896 kubelet[2730]: I0129 12:03:57.714880 2730 scope.go:117] "RemoveContainer" containerID="9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e" Jan 29 12:03:57.715574 containerd[1571]: time="2025-01-29T12:03:57.715544358Z" level=info msg="RemoveContainer for \"9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e\"" Jan 29 12:03:57.718702 containerd[1571]: time="2025-01-29T12:03:57.718677732Z" level=info msg="RemoveContainer for \"9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e\" returns successfully" Jan 29 12:03:57.718843 kubelet[2730]: I0129 12:03:57.718819 2730 scope.go:117] "RemoveContainer" containerID="b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b" Jan 29 12:03:57.718960 containerd[1571]: time="2025-01-29T12:03:57.718930352Z" level=error msg="ContainerStatus for \"b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b\": not found" Jan 29 12:03:57.719052 kubelet[2730]: E0129 12:03:57.719031 2730 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b\": not found" containerID="b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b" Jan 29 12:03:57.719086 kubelet[2730]: I0129 12:03:57.719053 2730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b"} err="failed to get container status \"b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b\": rpc error: code = NotFound desc = an error occurred when try to find container \"b47af6b70483da7d4dfb81091debe96b873865272a46e44b1fcaf616c4337e7b\": not found" Jan 29 12:03:57.719086 kubelet[2730]: I0129 12:03:57.719070 2730 scope.go:117] "RemoveContainer" 
containerID="86cc03faf806f207681b4108174d2795869e0a5a9baef9dd8799e6a929111b8f" Jan 29 12:03:57.719212 containerd[1571]: time="2025-01-29T12:03:57.719189227Z" level=error msg="ContainerStatus for \"86cc03faf806f207681b4108174d2795869e0a5a9baef9dd8799e6a929111b8f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86cc03faf806f207681b4108174d2795869e0a5a9baef9dd8799e6a929111b8f\": not found" Jan 29 12:03:57.719293 kubelet[2730]: E0129 12:03:57.719278 2730 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86cc03faf806f207681b4108174d2795869e0a5a9baef9dd8799e6a929111b8f\": not found" containerID="86cc03faf806f207681b4108174d2795869e0a5a9baef9dd8799e6a929111b8f" Jan 29 12:03:57.719345 kubelet[2730]: I0129 12:03:57.719294 2730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86cc03faf806f207681b4108174d2795869e0a5a9baef9dd8799e6a929111b8f"} err="failed to get container status \"86cc03faf806f207681b4108174d2795869e0a5a9baef9dd8799e6a929111b8f\": rpc error: code = NotFound desc = an error occurred when try to find container \"86cc03faf806f207681b4108174d2795869e0a5a9baef9dd8799e6a929111b8f\": not found" Jan 29 12:03:57.719345 kubelet[2730]: I0129 12:03:57.719305 2730 scope.go:117] "RemoveContainer" containerID="c8cc926574caf0d88f1cd02f7f59ac70b73438ae77b7e1ae98cfbd92e7f596d6" Jan 29 12:03:57.719434 containerd[1571]: time="2025-01-29T12:03:57.719411160Z" level=error msg="ContainerStatus for \"c8cc926574caf0d88f1cd02f7f59ac70b73438ae77b7e1ae98cfbd92e7f596d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c8cc926574caf0d88f1cd02f7f59ac70b73438ae77b7e1ae98cfbd92e7f596d6\": not found" Jan 29 12:03:57.719531 kubelet[2730]: E0129 12:03:57.719511 2730 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c8cc926574caf0d88f1cd02f7f59ac70b73438ae77b7e1ae98cfbd92e7f596d6\": not found" containerID="c8cc926574caf0d88f1cd02f7f59ac70b73438ae77b7e1ae98cfbd92e7f596d6" Jan 29 12:03:57.719574 kubelet[2730]: I0129 12:03:57.719536 2730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8cc926574caf0d88f1cd02f7f59ac70b73438ae77b7e1ae98cfbd92e7f596d6"} err="failed to get container status \"c8cc926574caf0d88f1cd02f7f59ac70b73438ae77b7e1ae98cfbd92e7f596d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"c8cc926574caf0d88f1cd02f7f59ac70b73438ae77b7e1ae98cfbd92e7f596d6\": not found" Jan 29 12:03:57.719574 kubelet[2730]: I0129 12:03:57.719552 2730 scope.go:117] "RemoveContainer" containerID="f09ff8a6e585caeb6c6aa8ac0ee170317ff65c3b6cb49d1a0eb00d3aefebb6e3" Jan 29 12:03:57.719691 containerd[1571]: time="2025-01-29T12:03:57.719664593Z" level=error msg="ContainerStatus for \"f09ff8a6e585caeb6c6aa8ac0ee170317ff65c3b6cb49d1a0eb00d3aefebb6e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f09ff8a6e585caeb6c6aa8ac0ee170317ff65c3b6cb49d1a0eb00d3aefebb6e3\": not found" Jan 29 12:03:57.719777 kubelet[2730]: E0129 12:03:57.719761 2730 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f09ff8a6e585caeb6c6aa8ac0ee170317ff65c3b6cb49d1a0eb00d3aefebb6e3\": 
not found" containerID="f09ff8a6e585caeb6c6aa8ac0ee170317ff65c3b6cb49d1a0eb00d3aefebb6e3" Jan 29 12:03:57.719890 kubelet[2730]: I0129 12:03:57.719779 2730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f09ff8a6e585caeb6c6aa8ac0ee170317ff65c3b6cb49d1a0eb00d3aefebb6e3"} err="failed to get container status \"f09ff8a6e585caeb6c6aa8ac0ee170317ff65c3b6cb49d1a0eb00d3aefebb6e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"f09ff8a6e585caeb6c6aa8ac0ee170317ff65c3b6cb49d1a0eb00d3aefebb6e3\": not found" Jan 29 12:03:57.719890 kubelet[2730]: I0129 12:03:57.719790 2730 scope.go:117] "RemoveContainer" containerID="9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e" Jan 29 12:03:57.719974 containerd[1571]: time="2025-01-29T12:03:57.719947471Z" level=error msg="ContainerStatus for \"9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e\": not found" Jan 29 12:03:57.720049 kubelet[2730]: E0129 12:03:57.720035 2730 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e\": not found" containerID="9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e" Jan 29 12:03:57.720080 kubelet[2730]: I0129 12:03:57.720050 2730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e"} err="failed to get container status \"9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f8184761434af88c7fc092d3e0aeae0d41a97c7d6be00fa85c863d482f0fb0e\": not found" Jan 29 12:03:57.856784 systemd[1]: var-lib-kubelet-pods-bedbb8f7\x2d003d\x2d4c64\x2da530\x2d79785223949b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzk8k9.mount: Deactivated successfully. Jan 29 12:03:57.857028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86a7c85dcaba4021847d9da943330bf019f67eba3b65de1bda8d942fd3ebdca3-rootfs.mount: Deactivated successfully. Jan 29 12:03:57.857208 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86a7c85dcaba4021847d9da943330bf019f67eba3b65de1bda8d942fd3ebdca3-shm.mount: Deactivated successfully. Jan 29 12:03:57.857434 systemd[1]: var-lib-kubelet-pods-f1dc81e4\x2dd50e\x2d4420\x2d8c0b\x2d2ddaacff48ff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv7vhv.mount: Deactivated successfully. Jan 29 12:03:57.857720 systemd[1]: var-lib-kubelet-pods-f1dc81e4\x2dd50e\x2d4420\x2d8c0b\x2d2ddaacff48ff-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 12:03:57.858034 systemd[1]: var-lib-kubelet-pods-f1dc81e4\x2dd50e\x2d4420\x2d8c0b\x2d2ddaacff48ff-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 29 12:03:58.481289 kubelet[2730]: I0129 12:03:58.481251 2730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bedbb8f7-003d-4c64-a530-79785223949b" path="/var/lib/kubelet/pods/bedbb8f7-003d-4c64-a530-79785223949b/volumes" Jan 29 12:03:58.481847 kubelet[2730]: I0129 12:03:58.481834 2730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" path="/var/lib/kubelet/pods/f1dc81e4-d50e-4420-8c0b-2ddaacff48ff/volumes" Jan 29 12:03:58.789075 sshd[4393]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:58.799139 systemd[1]: Started sshd@25-10.0.0.134:22-10.0.0.1:47808.service - OpenSSH per-connection server daemon (10.0.0.1:47808). Jan 29 12:03:58.799712 systemd[1]: sshd@24-10.0.0.134:22-10.0.0.1:47792.service: Deactivated successfully. Jan 29 12:03:58.803950 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 12:03:58.805057 systemd-logind[1547]: Session 25 logged out. Waiting for processes to exit. Jan 29 12:03:58.806216 systemd-logind[1547]: Removed session 25. Jan 29 12:03:58.825489 sshd[4558]: Accepted publickey for core from 10.0.0.1 port 47808 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:58.827075 sshd[4558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:58.831140 systemd-logind[1547]: New session 26 of user core. Jan 29 12:03:58.840130 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 12:03:59.286757 sshd[4558]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:59.297104 systemd[1]: Started sshd@26-10.0.0.134:22-10.0.0.1:47810.service - OpenSSH per-connection server daemon (10.0.0.1:47810). Jan 29 12:03:59.304073 kubelet[2730]: I0129 12:03:59.296967 2730 topology_manager.go:215] "Topology Admit Handler" podUID="44f76d95-b0ef-4a51-94aa-5a91482b460a" podNamespace="kube-system" podName="cilium-lxvs7" Jan 29 12:03:59.304073 kubelet[2730]: E0129 12:03:59.297019 2730 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" containerName="mount-bpf-fs" Jan 29 12:03:59.304073 kubelet[2730]: E0129 12:03:59.297027 2730 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" containerName="clean-cilium-state" Jan 29 12:03:59.304073 kubelet[2730]: E0129 12:03:59.297034 2730 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" containerName="cilium-agent" Jan 29 12:03:59.304073 kubelet[2730]: E0129 12:03:59.297041 2730 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" containerName="mount-cgroup" Jan 29 12:03:59.304073 kubelet[2730]: E0129 12:03:59.297047 2730 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" containerName="apply-sysctl-overwrites" Jan 29 12:03:59.304073 kubelet[2730]: E0129 12:03:59.297053 2730 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bedbb8f7-003d-4c64-a530-79785223949b" containerName="cilium-operator" Jan 29 12:03:59.304073 kubelet[2730]: I0129 12:03:59.297072 2730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1dc81e4-d50e-4420-8c0b-2ddaacff48ff" containerName="cilium-agent" Jan 29 12:03:59.304073 kubelet[2730]: I0129 12:03:59.297078 2730 memory_manager.go:354] "RemoveStaleState removing state" podUID="bedbb8f7-003d-4c64-a530-79785223949b" containerName="cilium-operator" 
Jan 29 12:03:59.303114 systemd[1]: sshd@25-10.0.0.134:22-10.0.0.1:47808.service: Deactivated successfully. Jan 29 12:03:59.305460 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 12:03:59.309319 systemd-logind[1547]: Session 26 logged out. Waiting for processes to exit. Jan 29 12:03:59.312399 systemd-logind[1547]: Removed session 26. Jan 29 12:03:59.340341 sshd[4572]: Accepted publickey for core from 10.0.0.1 port 47810 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:59.342036 sshd[4572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:59.346304 systemd-logind[1547]: New session 27 of user core. Jan 29 12:03:59.354054 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 29 12:03:59.405190 sshd[4572]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:59.421121 systemd[1]: Started sshd@27-10.0.0.134:22-10.0.0.1:47824.service - OpenSSH per-connection server daemon (10.0.0.1:47824). Jan 29 12:03:59.421697 systemd[1]: sshd@26-10.0.0.134:22-10.0.0.1:47810.service: Deactivated successfully. Jan 29 12:03:59.426599 systemd[1]: session-27.scope: Deactivated successfully. Jan 29 12:03:59.426952 systemd-logind[1547]: Session 27 logged out. Waiting for processes to exit. Jan 29 12:03:59.428235 systemd-logind[1547]: Removed session 27. Jan 29 12:03:59.429930 kubelet[2730]: I0129 12:03:59.429757 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44f76d95-b0ef-4a51-94aa-5a91482b460a-cni-path\") pod \"cilium-lxvs7\" (UID: \"44f76d95-b0ef-4a51-94aa-5a91482b460a\") " pod="kube-system/cilium-lxvs7" Jan 29 12:03:59.429930 kubelet[2730]: I0129 12:03:59.429849 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44f76d95-b0ef-4a51-94aa-5a91482b460a-etc-cni-netd\") pod \"cilium-lxvs7\" (UID: \"44f76d95-b0ef-4a51-94aa-5a91482b460a\") " pod="kube-system/cilium-lxvs7" Jan 29 12:03:59.429930 kubelet[2730]: I0129 12:03:59.429871 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44f76d95-b0ef-4a51-94aa-5a91482b460a-bpf-maps\") pod \"cilium-lxvs7\" (UID: \"44f76d95-b0ef-4a51-94aa-5a91482b460a\") " pod="kube-system/cilium-lxvs7" Jan 29 12:03:59.429930 kubelet[2730]: I0129 12:03:59.429910 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44f76d95-b0ef-4a51-94aa-5a91482b460a-host-proc-sys-net\") pod \"cilium-lxvs7\" (UID: \"44f76d95-b0ef-4a51-94aa-5a91482b460a\") " pod="kube-system/cilium-lxvs7" Jan 29 12:03:59.430141 kubelet[2730]: I0129 12:03:59.429963 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44f76d95-b0ef-4a51-94aa-5a91482b460a-hubble-tls\") pod \"cilium-lxvs7\" (UID: \"44f76d95-b0ef-4a51-94aa-5a91482b460a\") " pod="kube-system/cilium-lxvs7" Jan 29 12:03:59.430141 kubelet[2730]: I0129 12:03:59.430000 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44f76d95-b0ef-4a51-94aa-5a91482b460a-lib-modules\") pod \"cilium-lxvs7\" (UID: \"44f76d95-b0ef-4a51-94aa-5a91482b460a\") " 
pod="kube-system/cilium-lxvs7" Jan 29 12:03:59.430141 kubelet[2730]: I0129 12:03:59.430021 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44f76d95-b0ef-4a51-94aa-5a91482b460a-xtables-lock\") pod \"cilium-lxvs7\" (UID: \"44f76d95-b0ef-4a51-94aa-5a91482b460a\") " pod="kube-system/cilium-lxvs7" Jan 29 12:03:59.430141 kubelet[2730]: I0129 12:03:59.430045 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44f76d95-b0ef-4a51-94aa-5a91482b460a-host-proc-sys-kernel\") pod \"cilium-lxvs7\" (UID: \"44f76d95-b0ef-4a51-94aa-5a91482b460a\") " pod="kube-system/cilium-lxvs7" Jan 29 12:03:59.430141 kubelet[2730]: I0129 12:03:59.430072 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44f76d95-b0ef-4a51-94aa-5a91482b460a-hostproc\") pod \"cilium-lxvs7\" (UID: \"44f76d95-b0ef-4a51-94aa-5a91482b460a\") " pod="kube-system/cilium-lxvs7" Jan 29 12:03:59.430141 kubelet[2730]: I0129 12:03:59.430094 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44f76d95-b0ef-4a51-94aa-5a91482b460a-cilium-cgroup\") pod \"cilium-lxvs7\" (UID: \"44f76d95-b0ef-4a51-94aa-5a91482b460a\") " pod="kube-system/cilium-lxvs7" Jan 29 12:03:59.430265 kubelet[2730]: I0129 12:03:59.430115 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44f76d95-b0ef-4a51-94aa-5a91482b460a-cilium-run\") pod \"cilium-lxvs7\" (UID: \"44f76d95-b0ef-4a51-94aa-5a91482b460a\") " pod="kube-system/cilium-lxvs7" Jan 29 12:03:59.430265 kubelet[2730]: I0129 12:03:59.430146 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/44f76d95-b0ef-4a51-94aa-5a91482b460a-cilium-ipsec-secrets\") pod \"cilium-lxvs7\" (UID: \"44f76d95-b0ef-4a51-94aa-5a91482b460a\") " pod="kube-system/cilium-lxvs7" Jan 29 12:03:59.430265 kubelet[2730]: I0129 12:03:59.430169 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44f76d95-b0ef-4a51-94aa-5a91482b460a-cilium-config-path\") pod \"cilium-lxvs7\" (UID: \"44f76d95-b0ef-4a51-94aa-5a91482b460a\") " pod="kube-system/cilium-lxvs7" Jan 29 12:03:59.430265 kubelet[2730]: I0129 12:03:59.430187 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nvsj\" (UniqueName: \"kubernetes.io/projected/44f76d95-b0ef-4a51-94aa-5a91482b460a-kube-api-access-7nvsj\") pod \"cilium-lxvs7\" (UID: \"44f76d95-b0ef-4a51-94aa-5a91482b460a\") " pod="kube-system/cilium-lxvs7" Jan 29 12:03:59.430265 kubelet[2730]: I0129 12:03:59.430207 2730 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44f76d95-b0ef-4a51-94aa-5a91482b460a-clustermesh-secrets\") pod \"cilium-lxvs7\" (UID: \"44f76d95-b0ef-4a51-94aa-5a91482b460a\") " pod="kube-system/cilium-lxvs7" Jan 29 12:03:59.452184 sshd[4581]: Accepted publickey for core from 10.0.0.1 port 47824 ssh2: RSA 
SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:03:59.454160 sshd[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:59.458580 systemd-logind[1547]: New session 28 of user core. Jan 29 12:03:59.465243 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 29 12:03:59.479595 kubelet[2730]: E0129 12:03:59.479559 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:59.479838 kubelet[2730]: E0129 12:03:59.479797 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:59.541878 kubelet[2730]: E0129 12:03:59.541723 2730 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 12:03:59.609457 kubelet[2730]: E0129 12:03:59.609419 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:59.610109 containerd[1571]: time="2025-01-29T12:03:59.610048488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lxvs7,Uid:44f76d95-b0ef-4a51-94aa-5a91482b460a,Namespace:kube-system,Attempt:0,}" Jan 29 12:03:59.634711 containerd[1571]: time="2025-01-29T12:03:59.634623780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:59.634711 containerd[1571]: time="2025-01-29T12:03:59.634683234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:59.634711 containerd[1571]: time="2025-01-29T12:03:59.634697861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:59.634965 containerd[1571]: time="2025-01-29T12:03:59.634858457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:59.672133 containerd[1571]: time="2025-01-29T12:03:59.672072295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lxvs7,Uid:44f76d95-b0ef-4a51-94aa-5a91482b460a,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb10985c6f3bcb5df69dbd80e6e9f258602924b624b8c3d54da5d1c966ce0b74\"" Jan 29 12:03:59.672874 kubelet[2730]: E0129 12:03:59.672853 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:59.675410 containerd[1571]: time="2025-01-29T12:03:59.675371289Z" level=info msg="CreateContainer within sandbox \"eb10985c6f3bcb5df69dbd80e6e9f258602924b624b8c3d54da5d1c966ce0b74\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 12:03:59.693731 containerd[1571]: time="2025-01-29T12:03:59.693667411Z" level=info msg="CreateContainer within sandbox \"eb10985c6f3bcb5df69dbd80e6e9f258602924b624b8c3d54da5d1c966ce0b74\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c77298c6f84ff31f46cef304444e0bd7886405272b8a1ce9e34d27283140caad\"" Jan 29 12:03:59.694191 containerd[1571]: time="2025-01-29T12:03:59.694154298Z" level=info msg="StartContainer for \"c77298c6f84ff31f46cef304444e0bd7886405272b8a1ce9e34d27283140caad\"" Jan 29 12:03:59.752723 containerd[1571]: time="2025-01-29T12:03:59.752666406Z" level=info msg="StartContainer for \"c77298c6f84ff31f46cef304444e0bd7886405272b8a1ce9e34d27283140caad\" returns successfully" Jan 29 12:03:59.794699 containerd[1571]: time="2025-01-29T12:03:59.794550960Z" level=info msg="shim disconnected" id=c77298c6f84ff31f46cef304444e0bd7886405272b8a1ce9e34d27283140caad namespace=k8s.io Jan 29 12:03:59.794699 containerd[1571]: time="2025-01-29T12:03:59.794615763Z" level=warning msg="cleaning up after shim disconnected" id=c77298c6f84ff31f46cef304444e0bd7886405272b8a1ce9e34d27283140caad namespace=k8s.io Jan 29 12:03:59.794699 containerd[1571]: time="2025-01-29T12:03:59.794627445Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:04:00.671429 kubelet[2730]: E0129 12:04:00.671390 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:04:00.674284 containerd[1571]: time="2025-01-29T12:04:00.674244215Z" level=info msg="CreateContainer within sandbox \"eb10985c6f3bcb5df69dbd80e6e9f258602924b624b8c3d54da5d1c966ce0b74\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 12:04:00.689855 containerd[1571]: time="2025-01-29T12:04:00.689736514Z" level=info msg="CreateContainer within sandbox \"eb10985c6f3bcb5df69dbd80e6e9f258602924b624b8c3d54da5d1c966ce0b74\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6d8a231ce84603fc81259a391dcb1acf6635190ffb2eeb40527eb96ab1c40145\"" Jan 29 12:04:00.690298 containerd[1571]: time="2025-01-29T12:04:00.690259551Z" level=info msg="StartContainer for \"6d8a231ce84603fc81259a391dcb1acf6635190ffb2eeb40527eb96ab1c40145\"" Jan 29 12:04:00.736234 containerd[1571]: time="2025-01-29T12:04:00.736193204Z" level=info msg="StartContainer for \"6d8a231ce84603fc81259a391dcb1acf6635190ffb2eeb40527eb96ab1c40145\" returns successfully" Jan 29 12:04:00.763596 containerd[1571]: time="2025-01-29T12:04:00.763536288Z" level=info msg="shim disconnected" 
id=6d8a231ce84603fc81259a391dcb1acf6635190ffb2eeb40527eb96ab1c40145 namespace=k8s.io Jan 29 12:04:00.763596 containerd[1571]: time="2025-01-29T12:04:00.763589309Z" level=warning msg="cleaning up after shim disconnected" id=6d8a231ce84603fc81259a391dcb1acf6635190ffb2eeb40527eb96ab1c40145 namespace=k8s.io Jan 29 12:04:00.763596 containerd[1571]: time="2025-01-29T12:04:00.763598587Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:04:01.536156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d8a231ce84603fc81259a391dcb1acf6635190ffb2eeb40527eb96ab1c40145-rootfs.mount: Deactivated successfully. Jan 29 12:04:01.675149 kubelet[2730]: E0129 12:04:01.675116 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:04:01.677113 containerd[1571]: time="2025-01-29T12:04:01.677074265Z" level=info msg="CreateContainer within sandbox \"eb10985c6f3bcb5df69dbd80e6e9f258602924b624b8c3d54da5d1c966ce0b74\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 12:04:01.713988 containerd[1571]: time="2025-01-29T12:04:01.713929865Z" level=info msg="CreateContainer within sandbox \"eb10985c6f3bcb5df69dbd80e6e9f258602924b624b8c3d54da5d1c966ce0b74\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e2a936fbeca268085563682b3cdaeef3f5441e465bc7181198d28c1317b62759\"" Jan 29 12:04:01.714754 containerd[1571]: time="2025-01-29T12:04:01.714520720Z" level=info msg="StartContainer for \"e2a936fbeca268085563682b3cdaeef3f5441e465bc7181198d28c1317b62759\"" Jan 29 12:04:01.866438 containerd[1571]: time="2025-01-29T12:04:01.866335003Z" level=info msg="StartContainer for \"e2a936fbeca268085563682b3cdaeef3f5441e465bc7181198d28c1317b62759\" returns successfully" Jan 29 12:04:01.903025 containerd[1571]: time="2025-01-29T12:04:01.902958121Z" level=info msg="shim disconnected" id=e2a936fbeca268085563682b3cdaeef3f5441e465bc7181198d28c1317b62759 namespace=k8s.io Jan 29 12:04:01.903025 containerd[1571]: time="2025-01-29T12:04:01.903023877Z" level=warning msg="cleaning up after shim disconnected" id=e2a936fbeca268085563682b3cdaeef3f5441e465bc7181198d28c1317b62759 namespace=k8s.io Jan 29 12:04:01.903025 containerd[1571]: time="2025-01-29T12:04:01.903034617Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:04:02.536288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2a936fbeca268085563682b3cdaeef3f5441e465bc7181198d28c1317b62759-rootfs.mount: Deactivated successfully. 
Jan 29 12:04:02.678604 kubelet[2730]: E0129 12:04:02.678572 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:04:02.680739 containerd[1571]: time="2025-01-29T12:04:02.680644120Z" level=info msg="CreateContainer within sandbox \"eb10985c6f3bcb5df69dbd80e6e9f258602924b624b8c3d54da5d1c966ce0b74\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 12:04:02.711088 containerd[1571]: time="2025-01-29T12:04:02.711036066Z" level=info msg="CreateContainer within sandbox \"eb10985c6f3bcb5df69dbd80e6e9f258602924b624b8c3d54da5d1c966ce0b74\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6a04999c24f0770605a4e5eca1553cafa1d96d3da611080afaa8a612325f2d23\"" Jan 29 12:04:02.711567 containerd[1571]: time="2025-01-29T12:04:02.711539023Z" level=info msg="StartContainer for \"6a04999c24f0770605a4e5eca1553cafa1d96d3da611080afaa8a612325f2d23\"" Jan 29 12:04:02.771205 containerd[1571]: time="2025-01-29T12:04:02.771163942Z" level=info msg="StartContainer for \"6a04999c24f0770605a4e5eca1553cafa1d96d3da611080afaa8a612325f2d23\" returns successfully" Jan 29 12:04:02.795371 containerd[1571]: time="2025-01-29T12:04:02.795249970Z" level=info msg="shim disconnected" id=6a04999c24f0770605a4e5eca1553cafa1d96d3da611080afaa8a612325f2d23 namespace=k8s.io Jan 29 12:04:02.795371 containerd[1571]: time="2025-01-29T12:04:02.795306167Z" level=warning msg="cleaning up after shim disconnected" id=6a04999c24f0770605a4e5eca1553cafa1d96d3da611080afaa8a612325f2d23 namespace=k8s.io Jan 29 12:04:02.795371 containerd[1571]: time="2025-01-29T12:04:02.795317128Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:04:03.536168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a04999c24f0770605a4e5eca1553cafa1d96d3da611080afaa8a612325f2d23-rootfs.mount: Deactivated successfully. 
Jan 29 12:04:03.682294 kubelet[2730]: E0129 12:04:03.682267 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:04:03.684466 containerd[1571]: time="2025-01-29T12:04:03.683933934Z" level=info msg="CreateContainer within sandbox \"eb10985c6f3bcb5df69dbd80e6e9f258602924b624b8c3d54da5d1c966ce0b74\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 12:04:03.978717 containerd[1571]: time="2025-01-29T12:04:03.978671439Z" level=info msg="CreateContainer within sandbox \"eb10985c6f3bcb5df69dbd80e6e9f258602924b624b8c3d54da5d1c966ce0b74\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"89ffcd21502b3a70b1aafc676ddfde4eff015fd213c4fc6c385398941919e16a\""
Jan 29 12:04:03.979373 containerd[1571]: time="2025-01-29T12:04:03.979109493Z" level=info msg="StartContainer for \"89ffcd21502b3a70b1aafc676ddfde4eff015fd213c4fc6c385398941919e16a\""
Jan 29 12:04:04.076754 containerd[1571]: time="2025-01-29T12:04:04.076663592Z" level=info msg="StartContainer for \"89ffcd21502b3a70b1aafc676ddfde4eff015fd213c4fc6c385398941919e16a\" returns successfully"
Jan 29 12:04:04.409853 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 29 12:04:04.687221 kubelet[2730]: E0129 12:04:04.687192 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:04:04.965654 kubelet[2730]: I0129 12:04:04.965518 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lxvs7" podStartSLOduration=5.965501834 podStartE2EDuration="5.965501834s" podCreationTimestamp="2025-01-29 12:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:04.965324256 +0000 UTC m=+90.571556886" watchObservedRunningTime="2025-01-29 12:04:04.965501834 +0000 UTC m=+90.571734464"
Jan 29 12:04:05.689157 kubelet[2730]: E0129 12:04:05.689121 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:04:07.537580 systemd-networkd[1243]: lxc_health: Link UP
Jan 29 12:04:07.545556 systemd-networkd[1243]: lxc_health: Gained carrier
Jan 29 12:04:07.618123 kubelet[2730]: E0129 12:04:07.618093 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:04:07.692005 kubelet[2730]: E0129 12:04:07.691974 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:04:09.462896 systemd-networkd[1243]: lxc_health: Gained IPv6LL
Jan 29 12:04:11.480100 kubelet[2730]: E0129 12:04:11.480058 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:04:14.396775 sshd[4581]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:14.400897 systemd[1]: sshd@27-10.0.0.134:22-10.0.0.1:47824.service: Deactivated successfully.
Jan 29 12:04:14.403084 systemd-logind[1547]: Session 28 logged out. Waiting for processes to exit.
Jan 29 12:04:14.403153 systemd[1]: session-28.scope: Deactivated successfully.
Jan 29 12:04:14.404302 systemd-logind[1547]: Removed session 28.