Jan 30 13:45:52.868582 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:45:52.868604 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:45:52.868616 kernel: BIOS-provided physical RAM map:
Jan 30 13:45:52.868630 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 13:45:52.868636 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 13:45:52.868642 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 13:45:52.868649 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 30 13:45:52.868656 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 30 13:45:52.868662 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 30 13:45:52.868670 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 30 13:45:52.868677 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 13:45:52.868683 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 13:45:52.868689 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 30 13:45:52.868695 kernel: NX (Execute Disable) protection: active
Jan 30 13:45:52.868703 kernel: APIC: Static calls initialized
Jan 30 13:45:52.868711 kernel: SMBIOS 2.8 present.
Jan 30 13:45:52.868718 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 30 13:45:52.868725 kernel: Hypervisor detected: KVM
Jan 30 13:45:52.868732 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:45:52.868738 kernel: kvm-clock: using sched offset of 2130458510 cycles
Jan 30 13:45:52.868745 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:45:52.868753 kernel: tsc: Detected 2794.748 MHz processor
Jan 30 13:45:52.868760 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:45:52.868767 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:45:52.868774 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 30 13:45:52.868783 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 13:45:52.868790 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:45:52.868797 kernel: Using GB pages for direct mapping
Jan 30 13:45:52.868803 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:45:52.868810 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 30 13:45:52.868817 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:45:52.868824 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:45:52.868831 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:45:52.868840 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 30 13:45:52.868847 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:45:52.868853 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:45:52.868860 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:45:52.868867 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:45:52.868874 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 30 13:45:52.868881 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 30 13:45:52.868891 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 30 13:45:52.868900 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 30 13:45:52.868907 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 30 13:45:52.868915 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 30 13:45:52.868922 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 30 13:45:52.868929 kernel: No NUMA configuration found
Jan 30 13:45:52.868936 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 30 13:45:52.868943 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 30 13:45:52.868952 kernel: Zone ranges:
Jan 30 13:45:52.868959 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:45:52.868966 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 30 13:45:52.868973 kernel: Normal empty
Jan 30 13:45:52.868980 kernel: Movable zone start for each node
Jan 30 13:45:52.868987 kernel: Early memory node ranges
Jan 30 13:45:52.868994 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 13:45:52.869001 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 30 13:45:52.869008 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 30 13:45:52.869017 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:45:52.869024 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 13:45:52.869031 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 30 13:45:52.869038 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 13:45:52.869046 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:45:52.869053 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:45:52.869060 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:45:52.869067 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:45:52.869074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:45:52.869083 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:45:52.869090 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:45:52.869097 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:45:52.869104 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:45:52.869111 kernel: TSC deadline timer available
Jan 30 13:45:52.869118 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 30 13:45:52.869125 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:45:52.869132 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 30 13:45:52.869139 kernel: kvm-guest: setup PV sched yield
Jan 30 13:45:52.869147 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 30 13:45:52.869156 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:45:52.869163 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:45:52.869170 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 30 13:45:52.869177 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 30 13:45:52.869185 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 30 13:45:52.869191 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 30 13:45:52.869198 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:45:52.869205 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:45:52.869214 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:45:52.869224 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:45:52.869231 kernel: random: crng init done
Jan 30 13:45:52.869238 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:45:52.869245 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:45:52.869252 kernel: Fallback order for Node 0: 0
Jan 30 13:45:52.869259 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 30 13:45:52.869266 kernel: Policy zone: DMA32
Jan 30 13:45:52.869273 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:45:52.869283 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Jan 30 13:45:52.869290 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 13:45:52.869297 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:45:52.869304 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:45:52.869311 kernel: Dynamic Preempt: voluntary
Jan 30 13:45:52.869318 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:45:52.869326 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:45:52.869333 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 13:45:52.869340 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:45:52.869350 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:45:52.869357 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:45:52.869364 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:45:52.869371 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 13:45:52.869378 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 30 13:45:52.869385 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:45:52.869392 kernel: Console: colour VGA+ 80x25
Jan 30 13:45:52.869399 kernel: printk: console [ttyS0] enabled
Jan 30 13:45:52.869406 kernel: ACPI: Core revision 20230628
Jan 30 13:45:52.869416 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 13:45:52.869423 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:45:52.869430 kernel: x2apic enabled
Jan 30 13:45:52.869437 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:45:52.869444 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 30 13:45:52.869452 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 30 13:45:52.869459 kernel: kvm-guest: setup PV IPIs
Jan 30 13:45:52.869475 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:45:52.869495 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 13:45:52.869502 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 30 13:45:52.869510 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 13:45:52.869517 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 30 13:45:52.869527 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 30 13:45:52.869534 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:45:52.869542 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:45:52.869549 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:45:52.869557 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:45:52.869566 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 30 13:45:52.869574 kernel: RETBleed: Mitigation: untrained return thunk
Jan 30 13:45:52.869582 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:45:52.869589 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:45:52.869597 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 30 13:45:52.869605 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 30 13:45:52.869615 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 30 13:45:52.869641 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:45:52.869665 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:45:52.869679 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:45:52.869698 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:45:52.869712 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 30 13:45:52.869731 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:45:52.869747 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:45:52.869763 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:45:52.869770 kernel: landlock: Up and running.
Jan 30 13:45:52.869778 kernel: SELinux: Initializing.
Jan 30 13:45:52.869788 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:45:52.869795 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:45:52.869803 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 30 13:45:52.869810 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:45:52.869818 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:45:52.869826 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:45:52.869833 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 30 13:45:52.869841 kernel: ... version: 0
Jan 30 13:45:52.869850 kernel: ... bit width: 48
Jan 30 13:45:52.869858 kernel: ... generic registers: 6
Jan 30 13:45:52.869865 kernel: ... value mask: 0000ffffffffffff
Jan 30 13:45:52.869873 kernel: ... max period: 00007fffffffffff
Jan 30 13:45:52.869880 kernel: ... fixed-purpose events: 0
Jan 30 13:45:52.869887 kernel: ... event mask: 000000000000003f
Jan 30 13:45:52.869895 kernel: signal: max sigframe size: 1776
Jan 30 13:45:52.869902 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:45:52.869909 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:45:52.869917 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:45:52.869926 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:45:52.869934 kernel: .... node #0, CPUs: #1 #2 #3
Jan 30 13:45:52.869941 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 13:45:52.869948 kernel: smpboot: Max logical packages: 1
Jan 30 13:45:52.869956 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 30 13:45:52.869963 kernel: devtmpfs: initialized
Jan 30 13:45:52.869979 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:45:52.869987 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:45:52.870002 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 13:45:52.870012 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:45:52.870019 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:45:52.870027 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:45:52.870034 kernel: audit: type=2000 audit(1738244752.352:1): state=initialized audit_enabled=0 res=1
Jan 30 13:45:52.870041 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:45:52.870049 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:45:52.870056 kernel: cpuidle: using governor menu
Jan 30 13:45:52.870064 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:45:52.870071 kernel: dca service started, version 1.12.1
Jan 30 13:45:52.870081 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 30 13:45:52.870088 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 30 13:45:52.870096 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:45:52.870103 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:45:52.870111 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:45:52.870118 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:45:52.870126 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:45:52.870133 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:45:52.870140 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:45:52.870150 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:45:52.870157 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:45:52.870165 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:45:52.870172 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:45:52.870179 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:45:52.870187 kernel: ACPI: Interpreter enabled
Jan 30 13:45:52.870194 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 13:45:52.870202 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:45:52.870209 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:45:52.870219 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:45:52.870226 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 13:45:52.870234 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:45:52.870414 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:45:52.870573 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 30 13:45:52.870706 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 30 13:45:52.870717 kernel: PCI host bridge to bus 0000:00
Jan 30 13:45:52.870843 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:45:52.870955 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:45:52.871066 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:45:52.871238 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 30 13:45:52.871350 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 13:45:52.871460 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 30 13:45:52.871609 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:45:52.871767 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 13:45:52.871897 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 30 13:45:52.872020 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 30 13:45:52.872138 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 30 13:45:52.872318 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 30 13:45:52.872441 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:45:52.872646 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:45:52.872776 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 30 13:45:52.872896 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 30 13:45:52.873016 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 30 13:45:52.873144 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 30 13:45:52.873265 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 13:45:52.873385 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 30 13:45:52.873529 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 30 13:45:52.873670 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:45:52.873792 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 30 13:45:52.873914 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 30 13:45:52.874034 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 30 13:45:52.874153 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 30 13:45:52.874281 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 13:45:52.874406 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 13:45:52.874547 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 13:45:52.874678 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 30 13:45:52.874798 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 30 13:45:52.874926 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 13:45:52.875046 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 30 13:45:52.875057 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:45:52.875068 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:45:52.875076 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:45:52.875084 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:45:52.875091 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 13:45:52.875099 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 13:45:52.875106 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 13:45:52.875114 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 13:45:52.875121 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 13:45:52.875128 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 13:45:52.875138 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 13:45:52.875146 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 13:45:52.875153 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 13:45:52.875160 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 13:45:52.875168 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 13:45:52.875175 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 13:45:52.875183 kernel: iommu: Default domain type: Translated
Jan 30 13:45:52.875190 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:45:52.875198 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:45:52.875207 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:45:52.875215 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 13:45:52.875222 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 30 13:45:52.875343 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 13:45:52.875463 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 13:45:52.875598 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:45:52.875609 kernel: vgaarb: loaded
Jan 30 13:45:52.875616 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 13:45:52.875635 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 13:45:52.875642 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:45:52.875650 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:45:52.875658 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:45:52.875665 kernel: pnp: PnP ACPI init
Jan 30 13:45:52.875801 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 30 13:45:52.875812 kernel: pnp: PnP ACPI: found 6 devices
Jan 30 13:45:52.875820 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:45:52.875830 kernel: NET: Registered PF_INET protocol family
Jan 30 13:45:52.875838 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:45:52.875846 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:45:52.875853 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:45:52.875861 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:45:52.875868 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:45:52.875876 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:45:52.875883 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:45:52.875891 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:45:52.875900 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:45:52.875908 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:45:52.876018 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:45:52.876128 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:45:52.876239 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:45:52.876349 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 30 13:45:52.876458 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 30 13:45:52.876582 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 30 13:45:52.876596 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:45:52.876603 kernel: Initialise system trusted keyrings
Jan 30 13:45:52.876611 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:45:52.876618 kernel: Key type asymmetric registered
Jan 30 13:45:52.876633 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:45:52.876641 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:45:52.876648 kernel: io scheduler mq-deadline registered
Jan 30 13:45:52.876656 kernel: io scheduler kyber registered
Jan 30 13:45:52.876663 kernel: io scheduler bfq registered
Jan 30 13:45:52.876673 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:45:52.876681 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 13:45:52.876689 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 30 13:45:52.876696 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 30 13:45:52.876704 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:45:52.876711 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:45:52.876719 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:45:52.876726 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:45:52.876734 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:45:52.876862 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 30 13:45:52.876977 kernel: rtc_cmos 00:04: registered as rtc0
Jan 30 13:45:52.876987 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:45:52.877098 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T13:45:52 UTC (1738244752)
Jan 30 13:45:52.877211 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 30 13:45:52.877221 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 30 13:45:52.877228 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:45:52.877236 kernel: Segment Routing with IPv6
Jan 30 13:45:52.877246 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:45:52.877254 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:45:52.877261 kernel: Key type dns_resolver registered
Jan 30 13:45:52.877269 kernel: IPI shorthand broadcast: enabled
Jan 30 13:45:52.877276 kernel: sched_clock: Marking stable (562003503, 104817533)->(716464337, -49643301)
Jan 30 13:45:52.877284 kernel: registered taskstats version 1
Jan 30 13:45:52.877291 kernel: Loading compiled-in X.509 certificates
Jan 30 13:45:52.877299 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:45:52.877306 kernel: Key type .fscrypt registered
Jan 30 13:45:52.877316 kernel: Key type fscrypt-provisioning registered
Jan 30 13:45:52.877323 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:45:52.877331 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:45:52.877338 kernel: ima: No architecture policies found
Jan 30 13:45:52.877346 kernel: clk: Disabling unused clocks
Jan 30 13:45:52.877353 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:45:52.877361 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:45:52.877368 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:45:52.877376 kernel: Run /init as init process
Jan 30 13:45:52.877385 kernel: with arguments:
Jan 30 13:45:52.877392 kernel: /init
Jan 30 13:45:52.877400 kernel: with environment:
Jan 30 13:45:52.877407 kernel: HOME=/
Jan 30 13:45:52.877414 kernel: TERM=linux
Jan 30 13:45:52.877422 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:45:52.877431 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:45:52.877441 systemd[1]: Detected virtualization kvm.
Jan 30 13:45:52.877451 systemd[1]: Detected architecture x86-64.
Jan 30 13:45:52.877459 systemd[1]: Running in initrd.
Jan 30 13:45:52.877467 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:45:52.877474 systemd[1]: Hostname set to <localhost>.
Jan 30 13:45:52.877505 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:45:52.877513 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:45:52.877521 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:45:52.877529 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:45:52.877541 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:45:52.877560 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:45:52.877571 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:45:52.877580 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:45:52.877590 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:45:52.877601 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:45:52.877609 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:45:52.877617 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:45:52.877634 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:45:52.877642 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:45:52.877650 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:45:52.877658 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:45:52.877666 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:45:52.877677 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:45:52.877685 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:45:52.877693 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:45:52.877702 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:45:52.877710 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:45:52.877719 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:45:52.877727 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:45:52.877735 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:45:52.877746 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:45:52.877756 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:45:52.877764 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:45:52.877772 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:45:52.877781 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:45:52.877789 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:45:52.877797 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:45:52.877805 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:45:52.877831 systemd-journald[192]: Collecting audit messages is disabled.
Jan 30 13:45:52.877851 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:45:52.877863 systemd-journald[192]: Journal started
Jan 30 13:45:52.877880 systemd-journald[192]: Runtime Journal (/run/log/journal/0e5fd7b924d24cc2b99d9b46a8d8efad) is 6.0M, max 48.4M, 42.3M free.
Jan 30 13:45:52.878791 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:45:52.870016 systemd-modules-load[194]: Inserted module 'overlay'
Jan 30 13:45:52.910742 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:45:52.910761 kernel: Bridge firewalling registered
Jan 30 13:45:52.896948 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 30 13:45:52.913340 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:45:52.913828 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:45:52.916140 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:45:52.918536 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:45:52.944742 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:45:52.948031 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:45:52.951153 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:45:52.955671 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:45:52.962531 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:45:52.964176 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:45:52.966500 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:45:52.974105 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:45:52.974408 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:45:52.976530 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:45:52.982991 dracut-cmdline[226]: dracut-dracut-053
Jan 30 13:45:52.985653 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:45:53.018695 systemd-resolved[231]: Positive Trust Anchors:
Jan 30 13:45:53.018712 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:45:53.018749 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:45:53.021897 systemd-resolved[231]: Defaulting to hostname 'linux'.
Jan 30 13:45:53.023151 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:45:53.029708 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:45:53.060512 kernel: SCSI subsystem initialized
Jan 30 13:45:53.069507 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:45:53.079506 kernel: iscsi: registered transport (tcp)
Jan 30 13:45:53.099504 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:45:53.099528 kernel: QLogic iSCSI HBA Driver
Jan 30 13:45:53.141891 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:45:53.150660 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:45:53.173667 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:45:53.173700 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:45:53.174678 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:45:53.213504 kernel: raid6: avx2x4 gen() 30676 MB/s
Jan 30 13:45:53.230500 kernel: raid6: avx2x2 gen() 29919 MB/s
Jan 30 13:45:53.247578 kernel: raid6: avx2x1 gen() 26030 MB/s
Jan 30 13:45:53.247595 kernel: raid6: using algorithm avx2x4 gen() 30676 MB/s
Jan 30 13:45:53.265581 kernel: raid6: .... xor() 7557 MB/s, rmw enabled
Jan 30 13:45:53.265598 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 13:45:53.285500 kernel: xor: automatically using best checksumming function avx
Jan 30 13:45:53.433515 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:45:53.445000 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:45:53.462644 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:45:53.473998 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Jan 30 13:45:53.478440 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:45:53.479866 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:45:53.494913 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Jan 30 13:45:53.525361 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:45:53.548767 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:45:53.619815 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:45:53.627775 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:45:53.652432 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:45:53.655272 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:45:53.658238 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:45:53.659540 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:45:53.664505 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:45:53.669536 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 30 13:45:53.695324 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 13:45:53.695494 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:45:53.695509 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:45:53.695528 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:45:53.695540 kernel: GPT:9289727 != 19775487
Jan 30 13:45:53.695551 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:45:53.695561 kernel: GPT:9289727 != 19775487
Jan 30 13:45:53.695570 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:45:53.695580 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:45:53.695590 kernel: libata version 3.00 loaded.
Jan 30 13:45:53.675829 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:45:53.681583 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:45:53.681717 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:45:53.684245 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:45:53.685433 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:45:53.685569 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:45:53.686807 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:45:53.698044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:45:53.698651 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:45:53.712017 kernel: ahci 0000:00:1f.2: version 3.0
Jan 30 13:45:53.727806 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 30 13:45:53.727833 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 30 13:45:53.727993 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 30 13:45:53.728131 kernel: scsi host0: ahci
Jan 30 13:45:53.728290 kernel: scsi host1: ahci
Jan 30 13:45:53.728443 kernel: scsi host2: ahci
Jan 30 13:45:53.728610 kernel: scsi host3: ahci
Jan 30 13:45:53.728755 kernel: scsi host4: ahci
Jan 30 13:45:53.728897 kernel: scsi host5: ahci
Jan 30 13:45:53.729043 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 30 13:45:53.729054 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 30 13:45:53.729065 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 30 13:45:53.729075 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 30 13:45:53.729086 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 30 13:45:53.729096 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 30 13:45:53.729110 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (465)
Jan 30 13:45:53.727054 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:45:53.762871 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (461)
Jan 30 13:45:53.763883 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:45:53.777724 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:45:53.782934 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:45:53.783010 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:45:53.792650 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:45:53.805718 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:45:53.806611 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:45:53.828160 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:45:53.916814 disk-uuid[555]: Primary Header is updated.
Jan 30 13:45:53.916814 disk-uuid[555]: Secondary Entries is updated.
Jan 30 13:45:53.916814 disk-uuid[555]: Secondary Header is updated.
Jan 30 13:45:53.920209 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:45:53.924514 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:45:53.926514 kernel: block device autoloading is deprecated and will be removed.
Jan 30 13:45:54.041847 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 30 13:45:54.041905 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 30 13:45:54.041916 kernel: ata3.00: applying bridge limits
Jan 30 13:45:54.044321 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 30 13:45:54.044340 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 30 13:45:54.044350 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 30 13:45:54.044509 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 30 13:45:54.045512 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 30 13:45:54.046507 kernel: ata3.00: configured for UDMA/100
Jan 30 13:45:54.048527 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 30 13:45:54.094049 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 30 13:45:54.106278 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 13:45:54.106300 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 30 13:45:54.929181 disk-uuid[564]: The operation has completed successfully.
Jan 30 13:45:54.930554 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:45:54.951643 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:45:54.951761 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:45:54.980610 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:45:54.985896 sh[595]: Success
Jan 30 13:45:54.997503 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 30 13:45:55.027760 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:45:55.039137 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:45:55.041510 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:45:55.051610 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:45:55.051644 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:45:55.051661 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:45:55.052640 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:45:55.053970 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:45:55.057913 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:45:55.059546 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:45:55.076609 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:45:55.078181 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:45:55.086218 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:45:55.086246 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:45:55.086257 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:45:55.089510 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:45:55.096878 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:45:55.098505 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:45:55.108316 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:45:55.114659 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:45:55.167978 ignition[693]: Ignition 2.19.0
Jan 30 13:45:55.168312 ignition[693]: Stage: fetch-offline
Jan 30 13:45:55.168357 ignition[693]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:45:55.168368 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:45:55.168450 ignition[693]: parsed url from cmdline: ""
Jan 30 13:45:55.168454 ignition[693]: no config URL provided
Jan 30 13:45:55.168459 ignition[693]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:45:55.168468 ignition[693]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:45:55.168505 ignition[693]: op(1): [started] loading QEMU firmware config module
Jan 30 13:45:55.168511 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 30 13:45:55.175767 ignition[693]: op(1): [finished] loading QEMU firmware config module
Jan 30 13:45:55.189295 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:45:55.203671 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:45:55.218113 ignition[693]: parsing config with SHA512: f0d4d1940b1637adbc72bc893b8e714c6f29251c344302b8b11b763fdf7b25ed3a5089949974fb98f52479213f8a43a76aecb90a49ed09aee692e01992d2d014
Jan 30 13:45:55.221697 unknown[693]: fetched base config from "system"
Jan 30 13:45:55.222066 unknown[693]: fetched user config from "qemu"
Jan 30 13:45:55.223444 ignition[693]: fetch-offline: fetch-offline passed
Jan 30 13:45:55.223541 ignition[693]: Ignition finished successfully
Jan 30 13:45:55.224284 systemd-networkd[783]: lo: Link UP
Jan 30 13:45:55.224289 systemd-networkd[783]: lo: Gained carrier
Jan 30 13:45:55.226108 systemd-networkd[783]: Enumeration completed
Jan 30 13:45:55.226198 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:45:55.226632 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:45:55.226636 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:45:55.227672 systemd-networkd[783]: eth0: Link UP
Jan 30 13:45:55.227675 systemd-networkd[783]: eth0: Gained carrier
Jan 30 13:45:55.227682 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:45:55.253519 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:45:55.442807 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:45:55.445103 systemd[1]: Reached target network.target - Network.
Jan 30 13:45:55.445328 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 13:45:55.452641 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:45:55.472772 ignition[788]: Ignition 2.19.0
Jan 30 13:45:55.472783 ignition[788]: Stage: kargs
Jan 30 13:45:55.472984 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:45:55.472998 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:45:55.473886 ignition[788]: kargs: kargs passed
Jan 30 13:45:55.473942 ignition[788]: Ignition finished successfully
Jan 30 13:45:55.480007 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:45:55.492659 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:45:55.502365 ignition[796]: Ignition 2.19.0
Jan 30 13:45:55.502375 ignition[796]: Stage: disks
Jan 30 13:45:55.502536 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:45:55.502546 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:45:55.503287 ignition[796]: disks: disks passed
Jan 30 13:45:55.503332 ignition[796]: Ignition finished successfully
Jan 30 13:45:55.508540 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:45:55.509765 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:45:55.511597 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:45:55.512834 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:45:55.514827 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:45:55.515853 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:45:55.528621 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:45:55.539576 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:45:55.545844 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:45:55.553611 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:45:55.633507 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:45:55.634090 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:45:55.634686 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:45:55.646577 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:45:55.648443 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:45:55.650053 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:45:55.658152 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814)
Jan 30 13:45:55.658172 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:45:55.658183 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:45:55.658193 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:45:55.650101 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:45:55.662203 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:45:55.650127 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:45:55.658269 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:45:55.663626 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:45:55.667282 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:45:55.704044 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:45:55.707730 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:45:55.712203 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:45:55.715665 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:45:55.790051 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:45:55.799576 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:45:55.801725 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:45:55.807507 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:45:55.826860 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:45:55.828727 ignition[927]: INFO : Ignition 2.19.0
Jan 30 13:45:55.828727 ignition[927]: INFO : Stage: mount
Jan 30 13:45:55.828727 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:45:55.828727 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:45:55.828727 ignition[927]: INFO : mount: mount passed
Jan 30 13:45:55.828727 ignition[927]: INFO : Ignition finished successfully
Jan 30 13:45:55.829984 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:45:55.835587 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:45:56.051049 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:45:56.060722 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:45:56.069337 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (941)
Jan 30 13:45:56.069368 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:45:56.069386 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:45:56.071151 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:45:56.073510 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:45:56.075037 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:45:56.094032 ignition[958]: INFO : Ignition 2.19.0
Jan 30 13:45:56.094032 ignition[958]: INFO : Stage: files
Jan 30 13:45:56.095973 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:45:56.095973 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:45:56.095973 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:45:56.095973 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:45:56.095973 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:45:56.103100 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:45:56.104705 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:45:56.106513 unknown[958]: wrote ssh authorized keys file for user: core
Jan 30 13:45:56.107722 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:45:56.109130 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 13:45:56.109130 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 30 13:45:56.141843 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:45:56.217370 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 13:45:56.219466 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 13:45:56.219466 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 30 13:45:56.670998 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 13:45:56.689622 systemd-networkd[783]: eth0: Gained IPv6LL
Jan 30 13:45:56.766436 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 13:45:56.766436 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:45:56.770130 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:45:56.771829 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:45:56.773610 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:45:56.775318 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:45:56.777041 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:45:56.778753 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:45:56.780457 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:45:56.782347 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:45:56.784184 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:45:56.785954 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:45:56.788446 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:45:56.790915 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:45:56.793015 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 30 13:45:57.299615 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 13:45:58.382092 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:45:58.382092 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 30 13:45:58.386062 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:45:58.386062 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:45:58.386062 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 30 13:45:58.386062 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 30 13:45:58.386062 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:45:58.386062 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:45:58.386062 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 30 13:45:58.386062 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:45:58.408369 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:45:58.414174 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:45:58.416085 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:45:58.416085 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:45:58.416085 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:45:58.416085 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:45:58.416085 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:45:58.416085 ignition[958]: INFO : files: files passed
Jan 30 13:45:58.416085 ignition[958]: INFO : Ignition finished successfully
Jan 30 13:45:58.427735 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:45:58.439617 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:45:58.440317 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:45:58.448934 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:45:58.449184 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:45:58.452194 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 30 13:45:58.456447 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:45:58.456447 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:45:58.459813 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:45:58.462654 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:45:58.462882 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:45:58.477621 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:45:58.501837 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:45:58.501962 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:45:58.504578 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:45:58.506890 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:45:58.507145 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:45:58.519628 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:45:58.533645 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:45:58.535022 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:45:58.548238 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:45:58.548422 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:45:58.551670 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:45:58.553636 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:45:58.553754 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:45:58.555011 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:45:58.555343 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:45:58.555856 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:45:58.561859 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:45:58.562923 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:45:58.563250 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
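The entire files stage above is driven by the supplied Ignition config: the core user and its SSH key, the helm and cilium downloads, the sysext symlink, and the unit presets all come from one document. A Butane-style sketch that would produce roughly these operations (URLs, paths, and unit names taken from the log; key material and unit contents are placeholders):

  # config.bu (sketch); transpile with: butane config.bu > config.ign
  variant: flatcar
  version: 1.0.0
  passwd:
    users:
      - name: core
        ssh_authorized_keys:
          - ssh-ed25519 AAAA...placeholder
  storage:
    files:
      - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
        contents:
          source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
    links:
      - path: /etc/extensions/kubernetes.raw
        target: /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw
  systemd:
    units:
      - name: coreos-metadata.service
        enabled: false
      - name: prepare-helm.service
        enabled: true
        contents: |
          [Unit]
          Description=Unpack helm to /opt/bin
          [Service]
          Type=oneshot
          ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz linux-amd64/helm
          [Install]
          WantedBy=multi-user.target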
Jan 30 13:45:58.563944 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:45:58.570079 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:45:58.571416 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:45:58.573944 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:45:58.575635 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:45:58.575759 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:45:58.578217 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:45:58.580247 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:45:58.581242 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:45:58.581348 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:45:58.584544 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:45:58.584665 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:45:58.585243 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:45:58.585366 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:45:58.588586 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:45:58.588968 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:45:58.595559 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:45:58.595722 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:45:58.598226 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:45:58.598590 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:45:58.598696 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:45:58.601526 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:45:58.601630 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:45:58.604113 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:45:58.604226 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:45:58.606174 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:45:58.606292 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:45:58.621641 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:45:58.622568 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:45:58.622695 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:45:58.624351 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:45:58.626835 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:45:58.626984 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:45:58.629542 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:45:58.629710 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:45:58.637134 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:45:58.637280 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:45:58.650286 ignition[1012]: INFO : Ignition 2.19.0
Jan 30 13:45:58.650286 ignition[1012]: INFO : Stage: umount
Jan 30 13:45:58.652082 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:45:58.652082 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:45:58.653846 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:45:58.656031 ignition[1012]: INFO : umount: umount passed
Jan 30 13:45:58.656830 ignition[1012]: INFO : Ignition finished successfully
Jan 30 13:45:58.658942 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:45:58.659086 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:45:58.660211 systemd[1]: Stopped target network.target - Network.
Jan 30 13:45:58.662781 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:45:58.662848 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:45:58.664784 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:45:58.664843 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:45:58.666898 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:45:58.666957 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:45:58.669040 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:45:58.669099 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:45:58.671370 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:45:58.673545 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:45:58.678527 systemd-networkd[783]: eth0: DHCPv6 lease lost
Jan 30 13:45:58.680625 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:45:58.680792 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:45:58.682842 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:45:58.682988 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:45:58.686924 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:45:58.686987 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:45:58.699593 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:45:58.700655 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:45:58.700727 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:45:58.702856 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:45:58.702906 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:45:58.705019 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:45:58.705067 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:45:58.707553 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:45:58.707600 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:45:58.709803 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:45:58.721414 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:45:58.721590 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:45:58.724050 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:45:58.724221 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:45:58.726087 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:45:58.726167 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:45:58.727424 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:45:58.727518 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:45:58.729693 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:45:58.729742 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:45:58.731737 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:45:58.731785 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:45:58.733711 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:45:58.733757 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:45:58.746637 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:45:58.747697 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:45:58.747756 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:45:58.750035 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:45:58.750081 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:45:58.753374 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:45:58.753507 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:45:58.856718 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:45:58.856908 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:45:58.859494 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:45:58.860855 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:45:58.860929 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:45:58.873729 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:45:58.883245 systemd[1]: Switching root.
Jan 30 13:45:58.921951 systemd-journald[192]: Journal stopped
Jan 30 13:46:00.173769 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:46:00.173846 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:46:00.173869 kernel: SELinux: policy capability open_perms=1
Jan 30 13:46:00.173889 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:46:00.173907 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:46:00.173921 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:46:00.173936 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:46:00.173951 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:46:00.173966 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:46:00.173982 kernel: audit: type=1403 audit(1738244759.408:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:46:00.174003 systemd[1]: Successfully loaded SELinux policy in 40.743ms.
Jan 30 13:46:00.174031 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.587ms.
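After the root switch, journald restarts in the real root (hence 'Journal stopped' immediately before the SELinux policy load). The policy state reported here can be cross-checked later from a shell with standard tooling:

  # Inspect SELinux state on the running system
  getenforce   # current mode
  sestatus     # loaded policy, mode, and policy capabilities
  # The '+PCRE2' build flag shown below enables journal pattern search:
  journalctl -b --grep 'SELinux'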
Jan 30 13:46:00.174047 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:46:00.174064 systemd[1]: Detected virtualization kvm.
Jan 30 13:46:00.174079 systemd[1]: Detected architecture x86-64.
Jan 30 13:46:00.174093 systemd[1]: Detected first boot.
Jan 30 13:46:00.174108 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:46:00.174121 zram_generator::config[1056]: No configuration found.
Jan 30 13:46:00.174134 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:46:00.174146 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 13:46:00.174158 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 13:46:00.174172 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:46:00.174185 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:46:00.174196 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:46:00.174208 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:46:00.174220 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:46:00.174231 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:46:00.174244 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:46:00.174255 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:46:00.174267 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:46:00.174282 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:46:00.174294 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:46:00.174306 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:46:00.174317 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:46:00.174329 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:46:00.174342 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:46:00.174354 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 13:46:00.174365 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:46:00.174377 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 13:46:00.174391 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 13:46:00.174403 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:46:00.174415 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:46:00.174436 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:46:00.174448 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:46:00.174459 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:46:00.174471 systemd[1]: Reached target swap.target - Swaps.
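'Populated /etc with preset unit settings' is where the enable/disable decisions recorded during the Ignition files stage take effect: on first boot, systemd applies preset files to decide which units get enablement symlinks. The presets written earlier (prepare-helm enabled, coreos-metadata disabled) would correspond to a preset file of this general shape (path and filename illustrative):

  # /etc/systemd/system-preset/20-ignition.preset (illustrative)
  enable prepare-helm.service
  disable coreos-metadata.service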
Jan 30 13:46:00.174507 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:46:00.174522 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:46:00.174534 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:46:00.174546 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:46:00.174558 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:46:00.174575 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:46:00.174591 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:46:00.174607 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:46:00.174622 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:46:00.174635 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:46:00.174649 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:46:00.174661 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:46:00.174672 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:46:00.174685 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:46:00.174697 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:46:00.174709 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:46:00.174720 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:46:00.174732 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:46:00.174746 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:46:00.174757 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:46:00.174769 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:46:00.174781 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:46:00.174793 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:46:00.174804 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:46:00.174816 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:46:00.174828 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 13:46:00.174843 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 13:46:00.174855 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 13:46:00.174866 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 13:46:00.174878 kernel: fuse: init (API version 7.39)
Jan 30 13:46:00.174889 kernel: loop: module loaded
Jan 30 13:46:00.174900 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:46:00.174912 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:46:00.174924 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
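The modprobe@*.service entries above are instances of a single template unit: the instance name after '@' is expanded into the modprobe invocation, so one unit file serves configfs, dm_mod, drm, efi_pstore, fuse, and loop alike. A sketch of the upstream pattern:

  # modprobe@.service (sketch of the systemd template unit)
  [Unit]
  Description=Load Kernel Module %i
  DefaultDependencies=no

  [Service]
  Type=oneshot
  # leading '-' means a missing module does not fail the unit
  ExecStart=-/sbin/modprobe -abq %i

  # Usage: systemctl start modprobe@loop.service  ->  modprobe -abq loop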
Jan 30 13:46:00.174940 kernel: ACPI: bus type drm_connector registered
Jan 30 13:46:00.174953 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:46:00.174987 systemd-journald[1133]: Collecting audit messages is disabled.
Jan 30 13:46:00.175012 systemd-journald[1133]: Journal started
Jan 30 13:46:00.175034 systemd-journald[1133]: Runtime Journal (/run/log/journal/0e5fd7b924d24cc2b99d9b46a8d8efad) is 6.0M, max 48.4M, 42.3M free.
Jan 30 13:45:59.933616 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:45:59.954679 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 13:45:59.955160 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 13:46:00.179405 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:46:00.181586 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 13:46:00.181642 systemd[1]: Stopped verity-setup.service.
Jan 30 13:46:00.184512 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:46:00.188215 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:46:00.189073 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:46:00.190321 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:46:00.191649 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:46:00.192788 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:46:00.194017 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:46:00.195308 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:46:00.196612 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:46:00.198109 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:46:00.199742 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:46:00.199924 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:46:00.201450 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:46:00.201655 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:46:00.203228 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:46:00.203402 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:46:00.204841 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:46:00.205008 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:46:00.206854 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:46:00.207025 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:46:00.208463 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:46:00.208682 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:46:00.210309 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:46:00.211809 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:46:00.213635 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:46:00.228056 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:46:00.239657 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:46:00.242383 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:46:00.243674 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:46:00.243716 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:46:00.246220 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 13:46:00.248946 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 13:46:00.252668 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 13:46:00.254041 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:46:00.257359 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:46:00.259693 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:46:00.261440 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:46:00.265214 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:46:00.266716 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:46:00.268651 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:46:00.271602 systemd-journald[1133]: Time spent on flushing to /var/log/journal/0e5fd7b924d24cc2b99d9b46a8d8efad is 25.564ms for 953 entries.
Jan 30 13:46:00.271602 systemd-journald[1133]: System Journal (/var/log/journal/0e5fd7b924d24cc2b99d9b46a8d8efad) is 8.0M, max 195.6M, 187.6M free.
Jan 30 13:46:00.308212 systemd-journald[1133]: Received client request to flush runtime journal.
Jan 30 13:46:00.308255 kernel: loop0: detected capacity change from 0 to 140768
Jan 30 13:46:00.275688 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 13:46:00.279717 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 13:46:00.285009 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:46:00.287231 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 13:46:00.289392 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 13:46:00.291259 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 13:46:00.296214 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 13:46:00.302329 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 13:46:00.313702 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 13:46:00.320219 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 13:46:00.322894 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 13:46:00.327115 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
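systemd-journal-flush.service migrates the runtime journal from /run/log/journal into the persistent store under /var/log/journal; the size caps printed above (48.4M runtime, 195.6M system) derive from journald's defaults, which scale with filesystem size. They can be pinned explicitly via a drop-in (example values, not the ones in effect here):

  # /etc/systemd/journald.conf.d/size.conf (example)
  [Journal]
  Storage=persistent
  RuntimeMaxUse=48M
  SystemMaxUse=200M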
Jan 30 13:46:00.339518 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 13:46:00.344441 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 13:46:00.346195 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 13:46:00.350894 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 30 13:46:00.355189 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 13:46:00.368242 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:46:00.370506 kernel: loop1: detected capacity change from 0 to 210664
Jan 30 13:46:00.389255 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Jan 30 13:46:00.389277 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Jan 30 13:46:00.396555 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:46:00.403516 kernel: loop2: detected capacity change from 0 to 142488
Jan 30 13:46:00.445517 kernel: loop3: detected capacity change from 0 to 140768
Jan 30 13:46:00.458516 kernel: loop4: detected capacity change from 0 to 210664
Jan 30 13:46:00.468511 kernel: loop5: detected capacity change from 0 to 142488
Jan 30 13:46:00.478694 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 30 13:46:00.479655 (sd-merge)[1194]: Merged extensions into '/usr'.
Jan 30 13:46:00.483771 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 13:46:00.483787 systemd[1]: Reloading...
Jan 30 13:46:00.527643 zram_generator::config[1219]: No configuration found.
Jan 30 13:46:00.616810 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 13:46:00.671744 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:46:00.726162 systemd[1]: Reloading finished in 241 ms.
Jan 30 13:46:00.766872 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 13:46:00.768514 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 13:46:00.786809 systemd[1]: Starting ensure-sysext.service...
Jan 30 13:46:00.789701 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:46:00.795449 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Jan 30 13:46:00.795463 systemd[1]: Reloading...
Jan 30 13:46:00.817830 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 13:46:00.818297 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 13:46:00.819338 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 13:46:00.819763 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Jan 30 13:46:00.819851 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Jan 30 13:46:00.829825 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
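The (sd-merge) lines are systemd-sysext composing the loop-mounted extension images (the loop0..loop5 capacity changes above) into an overlay on /usr: the built-in containerd-flatcar and docker-flatcar extensions plus the kubernetes image Ignition symlinked into /etc/extensions. The merge state can be inspected and redone with the standard CLI:

  # Show which extension images are merged and from where
  systemd-sysext status
  # Re-merge after adding or removing an image under /etc/extensions
  systemd-sysext refresh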
Jan 30 13:46:00.830000 systemd-tmpfiles[1258]: Skipping /boot
Jan 30 13:46:00.847664 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:46:00.847826 systemd-tmpfiles[1258]: Skipping /boot
Jan 30 13:46:00.854576 zram_generator::config[1287]: No configuration found.
Jan 30 13:46:00.968183 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:46:01.018254 systemd[1]: Reloading finished in 222 ms.
Jan 30 13:46:01.037161 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 13:46:01.051011 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:46:01.060195 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 13:46:01.062957 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 13:46:01.065578 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 13:46:01.071619 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:46:01.075721 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:46:01.078706 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 13:46:01.082763 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:46:01.082927 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:46:01.084364 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:46:01.090446 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:46:01.093251 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:46:01.094632 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:46:01.096468 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 13:46:01.097588 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:46:01.098588 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:46:01.099018 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:46:01.101287 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 13:46:01.103907 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:46:01.104101 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:46:01.106437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:46:01.106908 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:46:01.112348 systemd-udevd[1329]: Using default interface naming scheme 'v255'.
Jan 30 13:46:01.116084 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
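The 'Duplicate line for path' messages are harmless: two tmpfiles.d fragments declare the same path, and the later declaration is ignored. Each tmpfiles.d line names a type, path, mode, ownership, age, and argument, e.g. (illustrative entries, not the shipped fragments):

  # /etc/tmpfiles.d/example.conf (illustrative)
  # Type  Path              Mode  UID   GID              Age  Argument
  d       /var/log/journal  2755  root  systemd-journal  -    -
  d       /root             0700  root  root             -    -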
Jan 30 13:46:01.116304 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:46:01.124835 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 13:46:01.126911 augenrules[1353]: No rules
Jan 30 13:46:01.126785 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 13:46:01.128800 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 13:46:01.134004 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:46:01.134224 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:46:01.144188 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:46:01.149730 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:46:01.156193 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:46:01.157348 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:46:01.157474 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:46:01.158225 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 13:46:01.160819 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:46:01.162951 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 13:46:01.164997 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 13:46:01.166676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:46:01.166853 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:46:01.168792 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:46:01.168958 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:46:01.185495 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:46:01.186557 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:46:01.199296 systemd[1]: Finished ensure-sysext.service.
Jan 30 13:46:01.201575 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1381)
Jan 30 13:46:01.208448 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:46:01.209682 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:46:01.218701 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:46:01.223073 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:46:01.226098 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:46:01.227468 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:46:01.232649 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:46:01.238728 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 13:46:01.239858 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:46:01.239894 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:46:01.240504 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:46:01.240688 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:46:01.242190 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:46:01.242362 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:46:01.243940 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:46:01.244106 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:46:01.255034 systemd-resolved[1327]: Positive Trust Anchors:
Jan 30 13:46:01.255217 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:46:01.255250 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:46:01.258200 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 30 13:46:01.259079 systemd-resolved[1327]: Defaulting to hostname 'linux'.
Jan 30 13:46:01.263625 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:46:01.267768 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:46:01.269053 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:46:01.269093 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:46:01.270530 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 30 13:46:01.281562 kernel: ACPI: button: Power Button [PWRF]
Jan 30 13:46:01.288417 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:46:01.295740 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 13:46:01.312588 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
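The positive trust anchor printed by systemd-resolved is the built-in IANA root-zone DS record used for DNSSEC validation; the negative anchors exempt private and reverse-lookup zones from it. Validation policy is set in resolved.conf (example drop-in, not necessarily the shipped default):

  # /etc/systemd/resolved.conf.d/dnssec.conf (example)
  [Resolve]
  # validate when possible, fall back if the upstream resolver cannot
  DNSSEC=allow-downgrade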
Jan 30 13:46:01.316557 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 30 13:46:01.319632 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 30 13:46:01.319837 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 30 13:46:01.333669 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 30 13:46:01.340407 systemd-networkd[1398]: lo: Link UP
Jan 30 13:46:01.340423 systemd-networkd[1398]: lo: Gained carrier
Jan 30 13:46:01.342245 systemd-networkd[1398]: Enumeration completed
Jan 30 13:46:01.342352 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:46:01.343068 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:46:01.343080 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:46:01.343700 systemd[1]: Reached target network.target - Network.
Jan 30 13:46:01.344047 systemd-networkd[1398]: eth0: Link UP
Jan 30 13:46:01.344059 systemd-networkd[1398]: eth0: Gained carrier
Jan 30 13:46:01.344071 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:46:01.359622 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:46:01.360452 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection.
Jan 30 13:46:01.360653 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 13:46:01.811513 systemd-resolved[1327]: Clock change detected. Flushing caches.
Jan 30 13:46:01.811639 systemd-timesyncd[1399]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 30 13:46:01.811722 systemd-timesyncd[1399]: Initial clock synchronization to Thu 2025-01-30 13:46:01.811482 UTC.
Jan 30 13:46:01.812329 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 13:46:01.815820 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 13:46:01.819254 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 13:46:01.821076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:46:01.893311 kernel: kvm_amd: TSC scaling supported
Jan 30 13:46:01.893487 kernel: kvm_amd: Nested Virtualization enabled
Jan 30 13:46:01.893526 kernel: kvm_amd: Nested Paging enabled
Jan 30 13:46:01.893556 kernel: kvm_amd: LBR virtualization supported
Jan 30 13:46:01.893580 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 30 13:46:01.893606 kernel: kvm_amd: Virtual GIF supported
Jan 30 13:46:01.912570 kernel: EDAC MC: Ver: 3.0.0
Jan 30 13:46:01.917394 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:46:01.948564 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 13:46:01.961410 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 13:46:01.970530 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:46:02.004463 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 13:46:02.006021 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:46:02.007120 systemd[1]: Reached target sysinit.target - System Initialization.
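eth0 is matched by the catch-all zz-default.network, which is why networkd notes the 'potentially unpredictable interface name': the file matches broadly and simply enables DHCP for anything not claimed earlier. A sketch of that shape (not the exact shipped file):

  # /usr/lib/systemd/network/zz-default.network (sketch)
  [Match]
  Name=*

  [Network]
  DHCP=yes

The timesyncd handshake also explains resolved's 'Clock change detected' line: the initial synchronization steps the clock forward (about 0.45 s here, to 13:46:01.811482 UTC), so the journal timestamps jump with it mid-boot.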
Jan 30 13:46:02.008294 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 13:46:02.009542 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 13:46:02.011014 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 13:46:02.012212 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 13:46:02.013505 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 13:46:02.014838 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 13:46:02.014869 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:46:02.015858 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:46:02.017535 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 13:46:02.020448 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 13:46:02.026694 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 13:46:02.029207 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 13:46:02.030986 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 13:46:02.032326 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:46:02.033452 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:46:02.034497 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:46:02.034523 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:46:02.035539 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 13:46:02.037777 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 13:46:02.041381 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 13:46:02.044089 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:46:02.045450 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 13:46:02.046887 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 13:46:02.049290 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 13:46:02.050922 jq[1433]: false
Jan 30 13:46:02.058476 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 13:46:02.063408 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 13:46:02.064898 extend-filesystems[1434]: Found loop3
Jan 30 13:46:02.064898 extend-filesystems[1434]: Found loop4
Jan 30 13:46:02.064898 extend-filesystems[1434]: Found loop5
Jan 30 13:46:02.064898 extend-filesystems[1434]: Found sr0
Jan 30 13:46:02.064898 extend-filesystems[1434]: Found vda
Jan 30 13:46:02.064898 extend-filesystems[1434]: Found vda1
Jan 30 13:46:02.064898 extend-filesystems[1434]: Found vda2
Jan 30 13:46:02.064898 extend-filesystems[1434]: Found vda3
Jan 30 13:46:02.064898 extend-filesystems[1434]: Found usr
Jan 30 13:46:02.064898 extend-filesystems[1434]: Found vda4
Jan 30 13:46:02.064898 extend-filesystems[1434]: Found vda6
Jan 30 13:46:02.064898 extend-filesystems[1434]: Found vda7
Jan 30 13:46:02.064898 extend-filesystems[1434]: Found vda9
Jan 30 13:46:02.064898 extend-filesystems[1434]: Checking size of /dev/vda9
Jan 30 13:46:02.118409 extend-filesystems[1434]: Resized partition /dev/vda9
Jan 30 13:46:02.077014 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 13:46:02.070527 dbus-daemon[1432]: [system] SELinux support is enabled
Jan 30 13:46:02.137422 extend-filesystems[1456]: resize2fs 1.47.1 (20-May-2024)
Jan 30 13:46:02.142665 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1377)
Jan 30 13:46:02.083505 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 13:46:02.085320 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 13:46:02.085839 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 13:46:02.149564 update_engine[1450]: I20250130 13:46:02.128046 1450 main.cc:92] Flatcar Update Engine starting
Jan 30 13:46:02.149564 update_engine[1450]: I20250130 13:46:02.137003 1450 update_check_scheduler.cc:74] Next update check in 2m28s
Jan 30 13:46:02.086631 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 13:46:02.090341 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 13:46:02.093204 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 13:46:02.098769 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 13:46:02.150260 jq[1452]: true
Jan 30 13:46:02.107661 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 13:46:02.150470 jq[1465]: true
Jan 30 13:46:02.107923 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 13:46:02.108357 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 13:46:02.108603 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 13:46:02.111715 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 13:46:02.111962 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 13:46:02.142927 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 30 13:46:02.143172 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 13:46:02.143195 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 13:46:02.143619 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:46:02.143634 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:46:02.147777 (ntainerd)[1458]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:46:02.148297 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:46:02.155416 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:46:02.183712 tar[1457]: linux-amd64/helm Jan 30 13:46:02.186265 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:46:02.233577 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:46:02.255183 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:46:02.261182 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:46:02.261214 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:46:02.261809 systemd-logind[1448]: New seat seat0. Jan 30 13:46:02.267748 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:46:02.279394 systemd[1]: Started sshd@0-10.0.0.67:22-10.0.0.1:41088.service - OpenSSH per-connection server daemon (10.0.0.1:41088). Jan 30 13:46:02.281136 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:46:02.283064 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:46:02.283348 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:46:02.287787 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:46:02.369774 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:46:02.372702 locksmithd[1472]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:46:02.381721 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:46:02.384625 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:46:02.386156 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:46:02.549257 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:46:02.706890 systemd[1]: sshd@0-10.0.0.67:22-10.0.0.1:41088.service: Deactivated successfully. Jan 30 13:46:03.089154 sshd[1500]: Connection closed by authenticating user core 10.0.0.1 port 41088 [preauth] Jan 30 13:46:03.089836 containerd[1458]: time="2025-01-30T13:46:03.089743252Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:46:03.090014 extend-filesystems[1456]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:46:03.090014 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:46:03.090014 extend-filesystems[1456]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:46:03.094285 extend-filesystems[1434]: Resized filesystem in /dev/vda9 Jan 30 13:46:03.095622 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:46:03.095854 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
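
extend-filesystems grew the mounted ext4 root from 553472 to 1864699 4k blocks (roughly 2.1 GiB to 7.1 GiB) using on-line resizing. The operation it logs reduces to something like the following sketch (device name taken from the log; the partition itself must already have been enlarged):

    # Grow a mounted ext4 filesystem to fill its partition
    resize2fs /dev/vda9
    # Verify: should report a block count of 1864699 afterwards
    dumpe2fs -h /dev/vda9 | grep 'Block count'
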
Jan 30 13:46:03.098950 tar[1457]: linux-amd64/LICENSE Jan 30 13:46:03.099022 tar[1457]: linux-amd64/README.md Jan 30 13:46:03.113017 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:46:03.113999 containerd[1458]: time="2025-01-30T13:46:03.113954837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:03.115608 containerd[1458]: time="2025-01-30T13:46:03.115571950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:46:03.115608 containerd[1458]: time="2025-01-30T13:46:03.115596997Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:46:03.115674 containerd[1458]: time="2025-01-30T13:46:03.115612075Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:46:03.115795 containerd[1458]: time="2025-01-30T13:46:03.115775111Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:46:03.115821 containerd[1458]: time="2025-01-30T13:46:03.115793455Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:03.115880 containerd[1458]: time="2025-01-30T13:46:03.115864999Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:46:03.115900 containerd[1458]: time="2025-01-30T13:46:03.115879687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:03.116081 containerd[1458]: time="2025-01-30T13:46:03.116059053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:46:03.116081 containerd[1458]: time="2025-01-30T13:46:03.116075704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:03.116128 containerd[1458]: time="2025-01-30T13:46:03.116087396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:46:03.116128 containerd[1458]: time="2025-01-30T13:46:03.116096794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:03.116255 containerd[1458]: time="2025-01-30T13:46:03.116212461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:03.116488 containerd[1458]: time="2025-01-30T13:46:03.116466527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:03.116600 containerd[1458]: time="2025-01-30T13:46:03.116584739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:46:03.116621 containerd[1458]: time="2025-01-30T13:46:03.116600248Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:46:03.116707 containerd[1458]: time="2025-01-30T13:46:03.116693814Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:46:03.116763 containerd[1458]: time="2025-01-30T13:46:03.116751001Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:46:03.346453 systemd-networkd[1398]: eth0: Gained IPv6LL Jan 30 13:46:03.349614 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:46:03.351461 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:46:03.364577 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:46:03.367058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:03.369128 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:46:03.385337 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:46:03.385595 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:46:03.387130 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:46:03.472045 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:46:03.526718 bash[1488]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:46:03.528740 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:46:03.530806 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:46:03.648875 containerd[1458]: time="2025-01-30T13:46:03.648738197Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:46:03.648875 containerd[1458]: time="2025-01-30T13:46:03.648825671Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:46:03.648875 containerd[1458]: time="2025-01-30T13:46:03.648846159Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:46:03.648875 containerd[1458]: time="2025-01-30T13:46:03.648864123Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:46:03.649062 containerd[1458]: time="2025-01-30T13:46:03.648880183Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:46:03.649157 containerd[1458]: time="2025-01-30T13:46:03.649121576Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:46:03.649484 containerd[1458]: time="2025-01-30T13:46:03.649440884Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:46:03.649620 containerd[1458]: time="2025-01-30T13:46:03.649592699Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:46:03.649620 containerd[1458]: time="2025-01-30T13:46:03.649616744Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jan 30 13:46:03.649694 containerd[1458]: time="2025-01-30T13:46:03.649633466Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:46:03.649694 containerd[1458]: time="2025-01-30T13:46:03.649651039Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:46:03.649694 containerd[1458]: time="2025-01-30T13:46:03.649666958Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:46:03.649694 containerd[1458]: time="2025-01-30T13:46:03.649682648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:46:03.649789 containerd[1458]: time="2025-01-30T13:46:03.649700291Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:46:03.649789 containerd[1458]: time="2025-01-30T13:46:03.649718886Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:46:03.649789 containerd[1458]: time="2025-01-30T13:46:03.649734946Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:46:03.649789 containerd[1458]: time="2025-01-30T13:46:03.649752078Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:46:03.649789 containerd[1458]: time="2025-01-30T13:46:03.649767968Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:46:03.649918 containerd[1458]: time="2025-01-30T13:46:03.649792093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:46:03.649918 containerd[1458]: time="2025-01-30T13:46:03.649810307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:46:03.649918 containerd[1458]: time="2025-01-30T13:46:03.649827029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:46:03.649918 containerd[1458]: time="2025-01-30T13:46:03.649842959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:46:03.649918 containerd[1458]: time="2025-01-30T13:46:03.649858187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:46:03.649918 containerd[1458]: time="2025-01-30T13:46:03.649875490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:46:03.649918 containerd[1458]: time="2025-01-30T13:46:03.649892431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:46:03.649918 containerd[1458]: time="2025-01-30T13:46:03.649909273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:46:03.650137 containerd[1458]: time="2025-01-30T13:46:03.649933508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:46:03.650137 containerd[1458]: time="2025-01-30T13:46:03.649957083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jan 30 13:46:03.650137 containerd[1458]: time="2025-01-30T13:46:03.649973012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:46:03.650137 containerd[1458]: time="2025-01-30T13:46:03.649988622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:46:03.650137 containerd[1458]: time="2025-01-30T13:46:03.650004201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:46:03.650137 containerd[1458]: time="2025-01-30T13:46:03.650025791Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:46:03.650137 containerd[1458]: time="2025-01-30T13:46:03.650053433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:46:03.650137 containerd[1458]: time="2025-01-30T13:46:03.650069523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:46:03.650137 containerd[1458]: time="2025-01-30T13:46:03.650084061Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:46:03.650373 containerd[1458]: time="2025-01-30T13:46:03.650169832Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:46:03.650373 containerd[1458]: time="2025-01-30T13:46:03.650196672Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:46:03.650373 containerd[1458]: time="2025-01-30T13:46:03.650214545Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:46:03.650373 containerd[1458]: time="2025-01-30T13:46:03.650249922Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:46:03.650373 containerd[1458]: time="2025-01-30T13:46:03.650267665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:46:03.650373 containerd[1458]: time="2025-01-30T13:46:03.650299264Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:46:03.650373 containerd[1458]: time="2025-01-30T13:46:03.650313942Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:46:03.650373 containerd[1458]: time="2025-01-30T13:46:03.650328740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:46:03.650786 containerd[1458]: time="2025-01-30T13:46:03.650702130Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:46:03.650786 containerd[1458]: time="2025-01-30T13:46:03.650779234Z" level=info msg="Connect containerd service" Jan 30 13:46:03.650972 containerd[1458]: time="2025-01-30T13:46:03.650818508Z" level=info msg="using legacy CRI server" Jan 30 13:46:03.650972 containerd[1458]: time="2025-01-30T13:46:03.650827826Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:46:03.650972 containerd[1458]: time="2025-01-30T13:46:03.650947881Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:46:03.651655 containerd[1458]: time="2025-01-30T13:46:03.651629128Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:46:03.652656 
containerd[1458]: time="2025-01-30T13:46:03.652020232Z" level=info msg="Start subscribing containerd event" Jan 30 13:46:03.652656 containerd[1458]: time="2025-01-30T13:46:03.652371130Z" level=info msg="Start recovering state" Jan 30 13:46:03.652656 containerd[1458]: time="2025-01-30T13:46:03.652444588Z" level=info msg="Start event monitor" Jan 30 13:46:03.652656 containerd[1458]: time="2025-01-30T13:46:03.652456921Z" level=info msg="Start snapshots syncer" Jan 30 13:46:03.652656 containerd[1458]: time="2025-01-30T13:46:03.652472981Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:46:03.652656 containerd[1458]: time="2025-01-30T13:46:03.652481998Z" level=info msg="Start streaming server" Jan 30 13:46:03.652656 containerd[1458]: time="2025-01-30T13:46:03.652251165Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:46:03.652656 containerd[1458]: time="2025-01-30T13:46:03.652641687Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:46:03.652977 containerd[1458]: time="2025-01-30T13:46:03.652704425Z" level=info msg="containerd successfully booted in 0.811660s" Jan 30 13:46:03.653167 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:46:04.110932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:04.112675 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:46:04.116315 systemd[1]: Startup finished in 691ms (kernel) + 6.718s (initrd) + 4.296s (userspace) = 11.706s. Jan 30 13:46:04.126489 (kubelet)[1551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:46:04.555213 kubelet[1551]: E0130 13:46:04.555021 1551 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:46:04.559144 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:46:04.559397 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:46:12.716282 systemd[1]: Started sshd@1-10.0.0.67:22-10.0.0.1:42726.service - OpenSSH per-connection server daemon (10.0.0.1:42726). Jan 30 13:46:12.747755 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 42726 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:46:12.750074 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:12.760591 systemd-logind[1448]: New session 1 of user core. Jan 30 13:46:12.762203 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:46:12.779498 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:46:12.791328 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:46:12.794034 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:46:12.801953 (systemd)[1569]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:46:12.908048 systemd[1569]: Queued start job for default target default.target. Jan 30 13:46:12.917528 systemd[1569]: Created slice app.slice - User Application Slice. 
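
The long "Start cri plugin with config {...}" dump above is containerd's effective CRI configuration. Expressed as a config.toml it would look roughly like the sketch below, keeping only the non-default-looking values visible in the dump (systemd cgroups for runc, overlayfs snapshotter, the registry.k8s.io/pause:3.8 sandbox image); the file path and everything omitted are assumptions, left at containerd's defaults:

    # Sketch of /etc/containerd/config.toml matching the dumped CRI settings
    cat <<'EOF' >/etc/containerd/config.toml
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
    EOF
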
Jan 30 13:46:12.917558 systemd[1569]: Reached target paths.target - Paths. Jan 30 13:46:12.917576 systemd[1569]: Reached target timers.target - Timers. Jan 30 13:46:12.919087 systemd[1569]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:46:12.929678 systemd[1569]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:46:12.929824 systemd[1569]: Reached target sockets.target - Sockets. Jan 30 13:46:12.929857 systemd[1569]: Reached target basic.target - Basic System. Jan 30 13:46:12.929901 systemd[1569]: Reached target default.target - Main User Target. Jan 30 13:46:12.929941 systemd[1569]: Startup finished in 121ms. Jan 30 13:46:12.930525 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:46:12.932476 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:46:12.994030 systemd[1]: Started sshd@2-10.0.0.67:22-10.0.0.1:42728.service - OpenSSH per-connection server daemon (10.0.0.1:42728). Jan 30 13:46:13.028669 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 42728 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:46:13.030173 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:13.033950 systemd-logind[1448]: New session 2 of user core. Jan 30 13:46:13.043336 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:46:13.097121 sshd[1580]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:13.107132 systemd[1]: sshd@2-10.0.0.67:22-10.0.0.1:42728.service: Deactivated successfully. Jan 30 13:46:13.109073 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:46:13.110618 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:46:13.119454 systemd[1]: Started sshd@3-10.0.0.67:22-10.0.0.1:42734.service - OpenSSH per-connection server daemon (10.0.0.1:42734). Jan 30 13:46:13.120265 systemd-logind[1448]: Removed session 2. Jan 30 13:46:13.147851 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 42734 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:46:13.149357 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:13.153110 systemd-logind[1448]: New session 3 of user core. Jan 30 13:46:13.162355 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:46:13.211076 sshd[1587]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:13.225691 systemd[1]: sshd@3-10.0.0.67:22-10.0.0.1:42734.service: Deactivated successfully. Jan 30 13:46:13.227220 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:46:13.228557 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:46:13.238507 systemd[1]: Started sshd@4-10.0.0.67:22-10.0.0.1:42740.service - OpenSSH per-connection server daemon (10.0.0.1:42740). Jan 30 13:46:13.239485 systemd-logind[1448]: Removed session 3. Jan 30 13:46:13.263345 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 42740 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:46:13.264700 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:13.268554 systemd-logind[1448]: New session 4 of user core. Jan 30 13:46:13.278331 systemd[1]: Started session-4.scope - Session 4 of User core. 
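
As the unit names show ("sshd@0-10.0.0.67:22-10.0.0.1:41088.service - OpenSSH per-connection server daemon"), sshd.socket here accepts each TCP connection and spawns a short-lived per-connection instance named after the local and remote endpoints. To watch these instances come and go during the session churn above (sketch):

    # List the per-connection OpenSSH units spawned from sshd.socket
    systemctl list-units --type=service 'sshd@*' --no-pager
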
Jan 30 13:46:13.333594 sshd[1594]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:13.342768 systemd[1]: sshd@4-10.0.0.67:22-10.0.0.1:42740.service: Deactivated successfully. Jan 30 13:46:13.344246 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:46:13.345649 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:46:13.346887 systemd[1]: Started sshd@5-10.0.0.67:22-10.0.0.1:42750.service - OpenSSH per-connection server daemon (10.0.0.1:42750). Jan 30 13:46:13.347715 systemd-logind[1448]: Removed session 4. Jan 30 13:46:13.376060 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 42750 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:46:13.377372 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:13.380871 systemd-logind[1448]: New session 5 of user core. Jan 30 13:46:13.390329 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:46:13.446734 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:46:13.447063 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:46:13.460995 sudo[1604]: pam_unix(sudo:session): session closed for user root Jan 30 13:46:13.462975 sshd[1601]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:13.473694 systemd[1]: sshd@5-10.0.0.67:22-10.0.0.1:42750.service: Deactivated successfully. Jan 30 13:46:13.475308 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:46:13.476728 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:46:13.485456 systemd[1]: Started sshd@6-10.0.0.67:22-10.0.0.1:42752.service - OpenSSH per-connection server daemon (10.0.0.1:42752). Jan 30 13:46:13.486278 systemd-logind[1448]: Removed session 5. Jan 30 13:46:13.511150 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 42752 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:46:13.512496 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:13.516003 systemd-logind[1448]: New session 6 of user core. Jan 30 13:46:13.523337 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:46:13.576459 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:46:13.576777 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:46:13.579981 sudo[1613]: pam_unix(sudo:session): session closed for user root Jan 30 13:46:13.585393 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:46:13.585762 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:46:13.602492 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:46:13.604066 auditctl[1616]: No rules Jan 30 13:46:13.604479 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:46:13.604679 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:46:13.607130 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:46:13.634522 augenrules[1634]: No rules Jan 30 13:46:13.636114 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
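
The sudo/auditctl/augenrules exchange above is the standard audit-rules reload: delete the rule fragments, then restart audit-rules.service, which flushes the kernel rule list and reloads whatever remains in /etc/audit/rules.d (here, nothing, hence "No rules" twice). Done by hand it is roughly:

    # Remove rule fragments and reload, as in the session above
    sudo rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules
    sudo auditctl -l     # prints "No rules" once rules.d is empty
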
Jan 30 13:46:13.637317 sudo[1612]: pam_unix(sudo:session): session closed for user root Jan 30 13:46:13.638960 sshd[1609]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:13.647881 systemd[1]: sshd@6-10.0.0.67:22-10.0.0.1:42752.service: Deactivated successfully. Jan 30 13:46:13.649490 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:46:13.650856 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:46:13.652036 systemd[1]: Started sshd@7-10.0.0.67:22-10.0.0.1:42766.service - OpenSSH per-connection server daemon (10.0.0.1:42766). Jan 30 13:46:13.652928 systemd-logind[1448]: Removed session 6. Jan 30 13:46:13.691581 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 42766 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:46:13.692924 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:13.696779 systemd-logind[1448]: New session 7 of user core. Jan 30 13:46:13.706334 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:46:13.759040 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:46:13.759374 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:46:14.138499 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:46:14.138602 (dockerd)[1663]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:46:14.422594 dockerd[1663]: time="2025-01-30T13:46:14.422429063Z" level=info msg="Starting up" Jan 30 13:46:14.503924 systemd[1]: var-lib-docker-metacopy\x2dcheck865847775-merged.mount: Deactivated successfully. Jan 30 13:46:14.533118 dockerd[1663]: time="2025-01-30T13:46:14.533047191Z" level=info msg="Loading containers: start." Jan 30 13:46:14.647259 kernel: Initializing XFRM netlink socket Jan 30 13:46:14.677294 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:46:14.685528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:14.722510 systemd-networkd[1398]: docker0: Link UP Jan 30 13:46:14.837927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:14.842483 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:46:14.895752 kubelet[1771]: E0130 13:46:14.895692 1771 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:46:14.903278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:46:14.903543 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:46:15.189727 dockerd[1663]: time="2025-01-30T13:46:15.189680373Z" level=info msg="Loading containers: done." Jan 30 13:46:15.204759 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck292547883-merged.mount: Deactivated successfully. 
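
The kubelet has now crashed twice with the same error: /var/lib/kubelet/config.yaml does not exist. On a kubeadm-provisioned node that file is only written by kubeadm init or kubeadm join, so the crash-and-scheduled-restart loop is expected until one of them runs; the invocation below is purely illustrative (this log does not show which bootstrap path the node takes):

    # config.yaml is generated during cluster bootstrap (illustrative)
    kubeadm init --cri-socket unix:///run/containerd/containerd.sock
    ls -l /var/lib/kubelet/config.yaml    # exists only after kubeadm has run
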
Jan 30 13:46:15.367153 dockerd[1663]: time="2025-01-30T13:46:15.367026079Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:46:15.367299 dockerd[1663]: time="2025-01-30T13:46:15.367263515Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:46:15.367404 dockerd[1663]: time="2025-01-30T13:46:15.367387557Z" level=info msg="Daemon has completed initialization" Jan 30 13:46:15.869996 dockerd[1663]: time="2025-01-30T13:46:15.869922009Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:46:15.870176 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:46:16.648132 containerd[1458]: time="2025-01-30T13:46:16.648090512Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:46:17.441660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474751634.mount: Deactivated successfully. Jan 30 13:46:18.701581 containerd[1458]: time="2025-01-30T13:46:18.701515076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:18.702212 containerd[1458]: time="2025-01-30T13:46:18.702176347Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 30 13:46:18.703467 containerd[1458]: time="2025-01-30T13:46:18.703432412Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:18.707016 containerd[1458]: time="2025-01-30T13:46:18.706966169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:18.708148 containerd[1458]: time="2025-01-30T13:46:18.708100697Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.059965811s" Jan 30 13:46:18.708148 containerd[1458]: time="2025-01-30T13:46:18.708137215Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 13:46:18.730653 containerd[1458]: time="2025-01-30T13:46:18.730599420Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 13:46:21.185851 containerd[1458]: time="2025-01-30T13:46:21.185788165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:21.198448 containerd[1458]: time="2025-01-30T13:46:21.198349402Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 30 13:46:21.201281 containerd[1458]: time="2025-01-30T13:46:21.201221619Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" 
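
dockerd's "Not using native diff" warning above is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, docker falls back to its slower built-in differ for image builds and commits, while running containers are unaffected. Confirming the driver behind the warning (sketch; whether the kernel exposes its config is an assumption):

    # Check the active storage driver
    docker info --format '{{.Driver}}'       # expected: overlay2
    # The kernel option itself, if /proc/config.gz is available
    zcat /proc/config.gz 2>/dev/null | grep OVERLAY_FS_REDIRECT_DIR
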
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:21.207375 containerd[1458]: time="2025-01-30T13:46:21.207319224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:21.208778 containerd[1458]: time="2025-01-30T13:46:21.208737113Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.478090024s" Jan 30 13:46:21.208778 containerd[1458]: time="2025-01-30T13:46:21.208775655Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 13:46:21.235952 containerd[1458]: time="2025-01-30T13:46:21.235724804Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 13:46:22.125978 containerd[1458]: time="2025-01-30T13:46:22.125910791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:22.126722 containerd[1458]: time="2025-01-30T13:46:22.126661749Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 30 13:46:22.128071 containerd[1458]: time="2025-01-30T13:46:22.128022812Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:22.130742 containerd[1458]: time="2025-01-30T13:46:22.130703719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:22.131635 containerd[1458]: time="2025-01-30T13:46:22.131601733Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 895.821345ms" Jan 30 13:46:22.131682 containerd[1458]: time="2025-01-30T13:46:22.131636158Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 13:46:22.154070 containerd[1458]: time="2025-01-30T13:46:22.154014726Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:46:23.169592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1887867797.mount: Deactivated successfully. 
Jan 30 13:46:23.911808 containerd[1458]: time="2025-01-30T13:46:23.911735289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:23.912587 containerd[1458]: time="2025-01-30T13:46:23.912548644Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 13:46:23.913953 containerd[1458]: time="2025-01-30T13:46:23.913922761Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:23.916258 containerd[1458]: time="2025-01-30T13:46:23.916216673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:23.916873 containerd[1458]: time="2025-01-30T13:46:23.916822029Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.762733385s" Jan 30 13:46:23.916917 containerd[1458]: time="2025-01-30T13:46:23.916872533Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:46:23.939329 containerd[1458]: time="2025-01-30T13:46:23.939290235Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:46:24.449847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2358300012.mount: Deactivated successfully. Jan 30 13:46:25.012499 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:46:25.024596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:25.171515 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:25.175733 (kubelet)[1983]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:46:25.376906 kubelet[1983]: E0130 13:46:25.376764 1983 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:46:25.381559 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:46:25.381770 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:46:25.416144 containerd[1458]: time="2025-01-30T13:46:25.416078710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:25.417078 containerd[1458]: time="2025-01-30T13:46:25.417030735Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:46:25.418443 containerd[1458]: time="2025-01-30T13:46:25.418394834Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:25.421118 containerd[1458]: time="2025-01-30T13:46:25.421083235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:25.422008 containerd[1458]: time="2025-01-30T13:46:25.421963837Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.482497643s" Jan 30 13:46:25.422008 containerd[1458]: time="2025-01-30T13:46:25.421994765Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:46:25.444799 containerd[1458]: time="2025-01-30T13:46:25.444772171Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:46:26.001834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount214644388.mount: Deactivated successfully. 
Jan 30 13:46:26.008041 containerd[1458]: time="2025-01-30T13:46:26.007981780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:26.008814 containerd[1458]: time="2025-01-30T13:46:26.008762794Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 30 13:46:26.010027 containerd[1458]: time="2025-01-30T13:46:26.009986279Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:26.012212 containerd[1458]: time="2025-01-30T13:46:26.012176206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:26.012901 containerd[1458]: time="2025-01-30T13:46:26.012868875Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 568.071747ms" Jan 30 13:46:26.012945 containerd[1458]: time="2025-01-30T13:46:26.012899793Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 13:46:26.037446 containerd[1458]: time="2025-01-30T13:46:26.037385993Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:46:26.644663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount90898435.mount: Deactivated successfully. Jan 30 13:46:29.852757 containerd[1458]: time="2025-01-30T13:46:29.852693537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:29.855436 containerd[1458]: time="2025-01-30T13:46:29.855397177Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 30 13:46:29.856899 containerd[1458]: time="2025-01-30T13:46:29.856833300Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:29.859972 containerd[1458]: time="2025-01-30T13:46:29.859941800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:29.860993 containerd[1458]: time="2025-01-30T13:46:29.860954689Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.823512541s" Jan 30 13:46:29.860993 containerd[1458]: time="2025-01-30T13:46:29.860990576Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 13:46:32.352607 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
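
The PullImage/ImageCreate sequences above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd) are CRI calls served by containerd; a preflight image pull produces exactly this pattern. One of the pulls reproduced by hand with crictl, a sketch using the socket path from the containerd dump:

    # Pull and list an image over the CRI socket, mirroring the log above
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
           pull registry.k8s.io/etcd:3.5.12-0
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images | grep etcd
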
Jan 30 13:46:32.371456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:32.388708 systemd[1]: Reloading requested from client PID 2135 ('systemctl') (unit session-7.scope)... Jan 30 13:46:32.388723 systemd[1]: Reloading... Jan 30 13:46:32.461274 zram_generator::config[2175]: No configuration found. Jan 30 13:46:32.656879 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:46:32.732718 systemd[1]: Reloading finished in 343 ms. Jan 30 13:46:32.779711 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:32.783862 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:46:32.784095 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:32.785705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:32.924249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:32.928753 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:46:32.967109 kubelet[2224]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:46:32.967109 kubelet[2224]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:46:32.967109 kubelet[2224]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:46:32.968071 kubelet[2224]: I0130 13:46:32.968026 2224 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:46:33.332327 kubelet[2224]: I0130 13:46:33.332297 2224 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:46:33.332327 kubelet[2224]: I0130 13:46:33.332320 2224 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:46:33.332522 kubelet[2224]: I0130 13:46:33.332510 2224 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:46:33.346672 kubelet[2224]: I0130 13:46:33.346645 2224 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:46:33.348265 kubelet[2224]: E0130 13:46:33.348245 2224 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.67:6443: connect: connection refused Jan 30 13:46:33.356875 kubelet[2224]: I0130 13:46:33.356840 2224 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:46:33.358569 kubelet[2224]: I0130 13:46:33.358520 2224 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:46:33.358745 kubelet[2224]: I0130 13:46:33.358554 2224 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:46:33.359155 kubelet[2224]: I0130 13:46:33.359125 2224 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:46:33.359155 kubelet[2224]: I0130 13:46:33.359145 2224 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:46:33.359344 kubelet[2224]: I0130 13:46:33.359315 2224 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:46:33.360136 kubelet[2224]: I0130 13:46:33.360106 2224 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:46:33.360136 kubelet[2224]: I0130 13:46:33.360123 2224 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:46:33.360206 kubelet[2224]: I0130 13:46:33.360147 2224 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:46:33.360206 kubelet[2224]: I0130 13:46:33.360169 2224 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:46:33.361157 kubelet[2224]: W0130 13:46:33.360557 2224 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 30 13:46:33.361157 kubelet[2224]: E0130 13:46:33.360606 2224 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 30 13:46:33.362110 kubelet[2224]: W0130 13:46:33.362074 2224 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://10.0.0.67:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 30 13:46:33.362155 kubelet[2224]: E0130 13:46:33.362111 2224 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 30 13:46:33.363883 kubelet[2224]: I0130 13:46:33.363863 2224 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:46:33.365679 kubelet[2224]: I0130 13:46:33.365642 2224 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:46:33.365679 kubelet[2224]: W0130 13:46:33.365696 2224 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:46:33.366392 kubelet[2224]: I0130 13:46:33.366353 2224 server.go:1264] "Started kubelet" Jan 30 13:46:33.367642 kubelet[2224]: I0130 13:46:33.367266 2224 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:46:33.368406 kubelet[2224]: I0130 13:46:33.368311 2224 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:46:33.369209 kubelet[2224]: I0130 13:46:33.369065 2224 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:46:33.370075 kubelet[2224]: I0130 13:46:33.369367 2224 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:46:33.370626 kubelet[2224]: I0130 13:46:33.370604 2224 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:46:33.372825 kubelet[2224]: E0130 13:46:33.372725 2224 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7c6e1145c154 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:46:33.366331732 +0000 UTC m=+0.433503865,LastTimestamp:2025-01-30 13:46:33.366331732 +0000 UTC m=+0.433503865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:46:33.373517 kubelet[2224]: E0130 13:46:33.372961 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:46:33.373517 kubelet[2224]: I0130 13:46:33.372996 2224 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:46:33.373517 kubelet[2224]: I0130 13:46:33.373073 2224 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:46:33.373517 kubelet[2224]: I0130 13:46:33.373122 2224 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:46:33.373517 kubelet[2224]: E0130 13:46:33.373402 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" 
interval="200ms" Jan 30 13:46:33.373517 kubelet[2224]: W0130 13:46:33.373443 2224 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 30 13:46:33.373517 kubelet[2224]: E0130 13:46:33.373489 2224 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 30 13:46:33.373734 kubelet[2224]: E0130 13:46:33.373606 2224 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:46:33.374176 kubelet[2224]: I0130 13:46:33.374151 2224 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:46:33.375089 kubelet[2224]: I0130 13:46:33.375043 2224 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:46:33.375089 kubelet[2224]: I0130 13:46:33.375061 2224 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:46:33.386501 kubelet[2224]: I0130 13:46:33.386455 2224 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:46:33.388441 kubelet[2224]: I0130 13:46:33.388414 2224 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:46:33.388476 kubelet[2224]: I0130 13:46:33.388452 2224 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:46:33.388476 kubelet[2224]: I0130 13:46:33.388473 2224 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:46:33.388550 kubelet[2224]: E0130 13:46:33.388518 2224 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:46:33.391350 kubelet[2224]: I0130 13:46:33.391146 2224 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:46:33.391350 kubelet[2224]: I0130 13:46:33.391159 2224 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:46:33.391350 kubelet[2224]: I0130 13:46:33.391186 2224 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:46:33.393964 kubelet[2224]: W0130 13:46:33.393539 2224 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 30 13:46:33.393964 kubelet[2224]: E0130 13:46:33.393593 2224 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 30 13:46:33.474423 kubelet[2224]: I0130 13:46:33.474369 2224 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:46:33.474760 kubelet[2224]: E0130 13:46:33.474719 2224 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" 
node="localhost" Jan 30 13:46:33.488820 kubelet[2224]: E0130 13:46:33.488780 2224 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:46:33.574665 kubelet[2224]: E0130 13:46:33.574603 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="400ms" Jan 30 13:46:33.676212 kubelet[2224]: I0130 13:46:33.676106 2224 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:46:33.676460 kubelet[2224]: E0130 13:46:33.676437 2224 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Jan 30 13:46:33.689570 kubelet[2224]: E0130 13:46:33.689537 2224 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:46:33.975500 kubelet[2224]: E0130 13:46:33.975377 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="800ms" Jan 30 13:46:34.078058 kubelet[2224]: I0130 13:46:34.078032 2224 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:46:34.078452 kubelet[2224]: E0130 13:46:34.078402 2224 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Jan 30 13:46:34.090491 kubelet[2224]: E0130 13:46:34.090456 2224 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:46:34.119753 kubelet[2224]: I0130 13:46:34.119724 2224 policy_none.go:49] "None policy: Start" Jan 30 13:46:34.120331 kubelet[2224]: I0130 13:46:34.120298 2224 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:46:34.120331 kubelet[2224]: I0130 13:46:34.120321 2224 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:46:34.215042 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:46:34.229464 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:46:34.232485 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 13:46:34.242143 kubelet[2224]: I0130 13:46:34.242112 2224 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:46:34.242418 kubelet[2224]: I0130 13:46:34.242359 2224 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:46:34.242510 kubelet[2224]: I0130 13:46:34.242480 2224 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:46:34.243642 kubelet[2224]: E0130 13:46:34.243587 2224 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 13:46:34.429738 kubelet[2224]: W0130 13:46:34.429670 2224 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 30 13:46:34.429738 kubelet[2224]: E0130 13:46:34.429734 2224 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 30 13:46:34.494842 kubelet[2224]: W0130 13:46:34.494706 2224 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 30 13:46:34.494842 kubelet[2224]: E0130 13:46:34.494781 2224 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 30 13:46:34.505113 kubelet[2224]: W0130 13:46:34.505051 2224 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 30 13:46:34.505164 kubelet[2224]: E0130 13:46:34.505119 2224 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 30 13:46:34.776759 kubelet[2224]: E0130 13:46:34.776687 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="1.6s" Jan 30 13:46:34.847467 kubelet[2224]: W0130 13:46:34.847420 2224 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 30 13:46:34.847467 kubelet[2224]: E0130 13:46:34.847471 2224 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Jan 30 13:46:34.879870 
kubelet[2224]: I0130 13:46:34.879838 2224 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:46:34.880222 kubelet[2224]: E0130 13:46:34.880187 2224 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Jan 30 13:46:34.891480 kubelet[2224]: I0130 13:46:34.891443 2224 topology_manager.go:215] "Topology Admit Handler" podUID="c51541b983cd6159f0ad928e94ccad97" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 13:46:34.892108 kubelet[2224]: I0130 13:46:34.892090 2224 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 13:46:34.892698 kubelet[2224]: I0130 13:46:34.892682 2224 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 13:46:34.898217 systemd[1]: Created slice kubepods-burstable-podc51541b983cd6159f0ad928e94ccad97.slice - libcontainer container kubepods-burstable-podc51541b983cd6159f0ad928e94ccad97.slice. Jan 30 13:46:34.924718 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice. Jan 30 13:46:34.939748 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice. Jan 30 13:46:34.983400 kubelet[2224]: I0130 13:46:34.983375 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c51541b983cd6159f0ad928e94ccad97-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c51541b983cd6159f0ad928e94ccad97\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:34.983709 kubelet[2224]: I0130 13:46:34.983408 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:34.983709 kubelet[2224]: I0130 13:46:34.983424 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:34.983709 kubelet[2224]: I0130 13:46:34.983438 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:34.983709 kubelet[2224]: I0130 13:46:34.983460 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod 
\"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:46:34.983709 kubelet[2224]: I0130 13:46:34.983477 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c51541b983cd6159f0ad928e94ccad97-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c51541b983cd6159f0ad928e94ccad97\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:34.983818 kubelet[2224]: I0130 13:46:34.983530 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:34.983818 kubelet[2224]: I0130 13:46:34.983568 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:34.983818 kubelet[2224]: I0130 13:46:34.983589 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c51541b983cd6159f0ad928e94ccad97-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c51541b983cd6159f0ad928e94ccad97\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:35.222792 kubelet[2224]: E0130 13:46:35.222685 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:35.223492 containerd[1458]: time="2025-01-30T13:46:35.223455803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c51541b983cd6159f0ad928e94ccad97,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:35.237749 kubelet[2224]: E0130 13:46:35.237724 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:35.238154 containerd[1458]: time="2025-01-30T13:46:35.238110429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:35.242419 kubelet[2224]: E0130 13:46:35.242394 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:35.242815 containerd[1458]: time="2025-01-30T13:46:35.242785144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:35.485071 kubelet[2224]: E0130 13:46:35.484956 2224 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.67:6443: connect: connection refused Jan 30 13:46:35.799060 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount246581802.mount: Deactivated successfully. Jan 30 13:46:35.808227 containerd[1458]: time="2025-01-30T13:46:35.808159452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:35.809290 containerd[1458]: time="2025-01-30T13:46:35.809219687Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:35.810350 containerd[1458]: time="2025-01-30T13:46:35.810300954Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:46:35.811277 containerd[1458]: time="2025-01-30T13:46:35.811219527Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:46:35.812197 containerd[1458]: time="2025-01-30T13:46:35.812165683Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:35.813764 containerd[1458]: time="2025-01-30T13:46:35.813728664Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:35.814865 containerd[1458]: time="2025-01-30T13:46:35.814836021Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:46:35.817110 containerd[1458]: time="2025-01-30T13:46:35.817076232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:35.818693 containerd[1458]: time="2025-01-30T13:46:35.818643772Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 595.106664ms" Jan 30 13:46:35.819943 containerd[1458]: time="2025-01-30T13:46:35.819917237Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 581.728147ms" Jan 30 13:46:35.825108 containerd[1458]: time="2025-01-30T13:46:35.825064871Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 582.226894ms" Jan 30 13:46:35.946765 containerd[1458]: time="2025-01-30T13:46:35.946645169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:35.946866 containerd[1458]: time="2025-01-30T13:46:35.946746443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:35.946866 containerd[1458]: time="2025-01-30T13:46:35.946766832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:35.946967 containerd[1458]: time="2025-01-30T13:46:35.946890009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:35.949456 containerd[1458]: time="2025-01-30T13:46:35.948737527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:35.949456 containerd[1458]: time="2025-01-30T13:46:35.949306930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:35.949456 containerd[1458]: time="2025-01-30T13:46:35.949318903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:35.949456 containerd[1458]: time="2025-01-30T13:46:35.949380100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:35.955526 containerd[1458]: time="2025-01-30T13:46:35.955313402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:35.955526 containerd[1458]: time="2025-01-30T13:46:35.955422722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:35.955526 containerd[1458]: time="2025-01-30T13:46:35.955444424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:35.955839 containerd[1458]: time="2025-01-30T13:46:35.955536791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:35.972472 systemd[1]: Started cri-containerd-50fa3e6fc7e781647ce28927fb2081e1f0798ce02b7cf982ce0ea7f88bfbb191.scope - libcontainer container 50fa3e6fc7e781647ce28927fb2081e1f0798ce02b7cf982ce0ea7f88bfbb191. Jan 30 13:46:35.977454 systemd[1]: Started cri-containerd-0a7a4f0652e215a359cc1478773e97b9048d734c72784172c9e6b6fe114f8d59.scope - libcontainer container 0a7a4f0652e215a359cc1478773e97b9048d734c72784172c9e6b6fe114f8d59. Jan 30 13:46:35.979960 systemd[1]: Started cri-containerd-94eb7b0e84037570c47ddf904688636d7f5f6e716db73921268c0096d4846bcf.scope - libcontainer container 94eb7b0e84037570c47ddf904688636d7f5f6e716db73921268c0096d4846bcf. 
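Annotation: the three RunPodSandbox requests above, and the cri-containerd-*.scope units systemd starts for them, are the kubelet driving containerd over the CRI gRPC API on its unix socket. A minimal sketch of that call using the published CRI client follows; the socket path is containerd's stock default (an assumption here), and the config is stripped to bare metadata, whereas the real kubelet also fills in DNS, port mappings, and Linux sandbox security settings.

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// The kubelet speaks CRI to containerd over its unix socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Mirrors the PodSandboxMetadata containerd prints above for
	// kube-scheduler-localhost: same name, UID, namespace, attempt 0.
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-scheduler-localhost",
				Uid:       "4b186e12ac9f083392bb0d1970b49be4",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	// containerd answers with the sandbox id that then shows up in the
	// "returns sandbox id" entries and the cri-containerd-<id>.scope units.
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
```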
Jan 30 13:46:36.016783 containerd[1458]: time="2025-01-30T13:46:36.016405429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"50fa3e6fc7e781647ce28927fb2081e1f0798ce02b7cf982ce0ea7f88bfbb191\"" Jan 30 13:46:36.017614 kubelet[2224]: E0130 13:46:36.017584 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:36.020670 containerd[1458]: time="2025-01-30T13:46:36.020624187Z" level=info msg="CreateContainer within sandbox \"50fa3e6fc7e781647ce28927fb2081e1f0798ce02b7cf982ce0ea7f88bfbb191\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:46:36.023515 containerd[1458]: time="2025-01-30T13:46:36.023485833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a7a4f0652e215a359cc1478773e97b9048d734c72784172c9e6b6fe114f8d59\"" Jan 30 13:46:36.024205 kubelet[2224]: E0130 13:46:36.024188 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:36.026124 containerd[1458]: time="2025-01-30T13:46:36.026096048Z" level=info msg="CreateContainer within sandbox \"0a7a4f0652e215a359cc1478773e97b9048d734c72784172c9e6b6fe114f8d59\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:46:36.026958 containerd[1458]: time="2025-01-30T13:46:36.026929146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c51541b983cd6159f0ad928e94ccad97,Namespace:kube-system,Attempt:0,} returns sandbox id \"94eb7b0e84037570c47ddf904688636d7f5f6e716db73921268c0096d4846bcf\"" Jan 30 13:46:36.027584 kubelet[2224]: E0130 13:46:36.027533 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:36.029192 containerd[1458]: time="2025-01-30T13:46:36.029155223Z" level=info msg="CreateContainer within sandbox \"94eb7b0e84037570c47ddf904688636d7f5f6e716db73921268c0096d4846bcf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:46:36.048485 containerd[1458]: time="2025-01-30T13:46:36.048456709Z" level=info msg="CreateContainer within sandbox \"50fa3e6fc7e781647ce28927fb2081e1f0798ce02b7cf982ce0ea7f88bfbb191\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e99682c343d2806fe5371536b077b435edb955bb648bdc57bdb9fa41fed0bb9f\"" Jan 30 13:46:36.048921 containerd[1458]: time="2025-01-30T13:46:36.048901752Z" level=info msg="StartContainer for \"e99682c343d2806fe5371536b077b435edb955bb648bdc57bdb9fa41fed0bb9f\"" Jan 30 13:46:36.053645 containerd[1458]: time="2025-01-30T13:46:36.053504918Z" level=info msg="CreateContainer within sandbox \"0a7a4f0652e215a359cc1478773e97b9048d734c72784172c9e6b6fe114f8d59\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bddac23b9dc17febd18a17ef889a7099f060cfb058b21b47e19bc6650aa30ab8\"" Jan 30 13:46:36.053911 containerd[1458]: time="2025-01-30T13:46:36.053887501Z" level=info msg="StartContainer for \"bddac23b9dc17febd18a17ef889a7099f060cfb058b21b47e19bc6650aa30ab8\"" Jan 30 
13:46:36.057280 containerd[1458]: time="2025-01-30T13:46:36.057210972Z" level=info msg="CreateContainer within sandbox \"94eb7b0e84037570c47ddf904688636d7f5f6e716db73921268c0096d4846bcf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3f7754de3bba371e45a16ccc3ed53ea33384eb3211e527e2df2ad306d26eb973\"" Jan 30 13:46:36.057820 containerd[1458]: time="2025-01-30T13:46:36.057794893Z" level=info msg="StartContainer for \"3f7754de3bba371e45a16ccc3ed53ea33384eb3211e527e2df2ad306d26eb973\"" Jan 30 13:46:36.074431 systemd[1]: Started cri-containerd-e99682c343d2806fe5371536b077b435edb955bb648bdc57bdb9fa41fed0bb9f.scope - libcontainer container e99682c343d2806fe5371536b077b435edb955bb648bdc57bdb9fa41fed0bb9f. Jan 30 13:46:36.078172 systemd[1]: Started cri-containerd-bddac23b9dc17febd18a17ef889a7099f060cfb058b21b47e19bc6650aa30ab8.scope - libcontainer container bddac23b9dc17febd18a17ef889a7099f060cfb058b21b47e19bc6650aa30ab8. Jan 30 13:46:36.084009 systemd[1]: Started cri-containerd-3f7754de3bba371e45a16ccc3ed53ea33384eb3211e527e2df2ad306d26eb973.scope - libcontainer container 3f7754de3bba371e45a16ccc3ed53ea33384eb3211e527e2df2ad306d26eb973. Jan 30 13:46:36.129198 containerd[1458]: time="2025-01-30T13:46:36.128999990Z" level=info msg="StartContainer for \"e99682c343d2806fe5371536b077b435edb955bb648bdc57bdb9fa41fed0bb9f\" returns successfully" Jan 30 13:46:36.140795 containerd[1458]: time="2025-01-30T13:46:36.140651899Z" level=info msg="StartContainer for \"bddac23b9dc17febd18a17ef889a7099f060cfb058b21b47e19bc6650aa30ab8\" returns successfully" Jan 30 13:46:36.147783 containerd[1458]: time="2025-01-30T13:46:36.147715672Z" level=info msg="StartContainer for \"3f7754de3bba371e45a16ccc3ed53ea33384eb3211e527e2df2ad306d26eb973\" returns successfully" Jan 30 13:46:36.399649 kubelet[2224]: E0130 13:46:36.399483 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:36.404498 kubelet[2224]: E0130 13:46:36.403141 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:36.405820 kubelet[2224]: E0130 13:46:36.405734 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:36.482536 kubelet[2224]: I0130 13:46:36.481826 2224 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:46:37.164328 kubelet[2224]: E0130 13:46:37.163468 2224 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 13:46:37.266125 kubelet[2224]: I0130 13:46:37.266069 2224 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 13:46:37.276915 kubelet[2224]: E0130 13:46:37.276848 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:46:37.377245 kubelet[2224]: E0130 13:46:37.377192 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:46:37.407095 kubelet[2224]: E0130 13:46:37.407055 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 30 13:46:37.478024 kubelet[2224]: E0130 13:46:37.477904 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:46:37.578530 kubelet[2224]: E0130 13:46:37.578500 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:46:38.364035 kubelet[2224]: I0130 13:46:38.363970 2224 apiserver.go:52] "Watching apiserver" Jan 30 13:46:38.373967 kubelet[2224]: I0130 13:46:38.373910 2224 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:46:38.554633 kubelet[2224]: E0130 13:46:38.554597 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:39.360478 systemd[1]: Reloading requested from client PID 2501 ('systemctl') (unit session-7.scope)... Jan 30 13:46:39.360498 systemd[1]: Reloading... Jan 30 13:46:39.409197 kubelet[2224]: E0130 13:46:39.409160 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:39.433277 zram_generator::config[2543]: No configuration found. Jan 30 13:46:39.548578 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:46:39.640774 systemd[1]: Reloading finished in 279 ms. Jan 30 13:46:39.682371 kubelet[2224]: E0130 13:46:39.682246 2224 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.181f7c6e1145c154 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:46:33.366331732 +0000 UTC m=+0.433503865,LastTimestamp:2025-01-30 13:46:33.366331732 +0000 UTC m=+0.433503865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:46:39.682358 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:39.706612 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:46:39.706943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:39.718455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:39.858167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:39.862552 (kubelet)[2585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:46:39.908584 kubelet[2585]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:46:39.908584 kubelet[2585]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 30 13:46:39.908584 kubelet[2585]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:46:39.908584 kubelet[2585]: I0130 13:46:39.908559 2585 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:46:39.912965 kubelet[2585]: I0130 13:46:39.912920 2585 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:46:39.912965 kubelet[2585]: I0130 13:46:39.912961 2585 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:46:39.913280 kubelet[2585]: I0130 13:46:39.913255 2585 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:46:39.914715 kubelet[2585]: I0130 13:46:39.914691 2585 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:46:39.916151 kubelet[2585]: I0130 13:46:39.916125 2585 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:46:39.924262 kubelet[2585]: I0130 13:46:39.924222 2585 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:46:39.924522 kubelet[2585]: I0130 13:46:39.924487 2585 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:46:39.924669 kubelet[2585]: I0130 13:46:39.924513 2585 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:46:39.924745 kubelet[2585]: I0130 13:46:39.924683 2585 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:46:39.924745 kubelet[2585]: I0130 13:46:39.924691 2585 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:46:39.924745 kubelet[2585]: I0130 13:46:39.924733 2585 
state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:46:39.924840 kubelet[2585]: I0130 13:46:39.924827 2585 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:46:39.924840 kubelet[2585]: I0130 13:46:39.924839 2585 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:46:39.924884 kubelet[2585]: I0130 13:46:39.924858 2585 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:46:39.924884 kubelet[2585]: I0130 13:46:39.924872 2585 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:46:39.925447 kubelet[2585]: I0130 13:46:39.925251 2585 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:46:39.925768 kubelet[2585]: I0130 13:46:39.925675 2585 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:46:39.926052 kubelet[2585]: I0130 13:46:39.926035 2585 server.go:1264] "Started kubelet" Jan 30 13:46:39.927995 kubelet[2585]: I0130 13:46:39.927974 2585 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:46:39.929341 kubelet[2585]: I0130 13:46:39.929323 2585 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:46:39.930262 kubelet[2585]: I0130 13:46:39.929431 2585 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:46:39.930262 kubelet[2585]: I0130 13:46:39.929549 2585 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:46:39.932725 kubelet[2585]: I0130 13:46:39.932564 2585 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:46:39.933022 kubelet[2585]: E0130 13:46:39.933005 2585 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:46:39.933605 kubelet[2585]: I0130 13:46:39.933578 2585 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:46:39.937572 kubelet[2585]: I0130 13:46:39.936831 2585 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:46:39.939527 kubelet[2585]: I0130 13:46:39.938629 2585 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:46:39.939607 kubelet[2585]: I0130 13:46:39.939527 2585 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:46:39.939660 kubelet[2585]: I0130 13:46:39.939639 2585 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:46:39.941770 kubelet[2585]: I0130 13:46:39.941749 2585 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:46:39.942635 kubelet[2585]: I0130 13:46:39.942607 2585 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:46:39.943857 kubelet[2585]: I0130 13:46:39.943842 2585 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:46:39.943938 kubelet[2585]: I0130 13:46:39.943929 2585 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:46:39.943993 kubelet[2585]: I0130 13:46:39.943984 2585 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:46:39.944075 kubelet[2585]: E0130 13:46:39.944061 2585 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:46:39.973586 kubelet[2585]: I0130 13:46:39.973551 2585 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:46:39.973586 kubelet[2585]: I0130 13:46:39.973572 2585 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:46:39.973586 kubelet[2585]: I0130 13:46:39.973591 2585 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:46:39.973745 kubelet[2585]: I0130 13:46:39.973725 2585 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:46:39.973769 kubelet[2585]: I0130 13:46:39.973735 2585 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:46:39.973769 kubelet[2585]: I0130 13:46:39.973752 2585 policy_none.go:49] "None policy: Start" Jan 30 13:46:39.974165 kubelet[2585]: I0130 13:46:39.974152 2585 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:46:39.974198 kubelet[2585]: I0130 13:46:39.974174 2585 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:46:39.974333 kubelet[2585]: I0130 13:46:39.974317 2585 state_mem.go:75] "Updated machine memory state" Jan 30 13:46:39.978254 kubelet[2585]: I0130 13:46:39.978217 2585 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:46:39.978679 kubelet[2585]: I0130 13:46:39.978401 2585 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:46:39.978679 kubelet[2585]: I0130 13:46:39.978494 2585 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:46:40.033477 kubelet[2585]: I0130 13:46:40.033451 2585 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:46:40.045222 kubelet[2585]: I0130 13:46:40.045186 2585 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 13:46:40.045326 kubelet[2585]: I0130 13:46:40.045288 2585 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 13:46:40.045445 kubelet[2585]: I0130 13:46:40.045398 2585 topology_manager.go:215] "Topology Admit Handler" podUID="c51541b983cd6159f0ad928e94ccad97" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 13:46:40.130002 kubelet[2585]: I0130 13:46:40.129973 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:40.130002 kubelet[2585]: I0130 13:46:40.130000 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod 
\"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:40.130136 kubelet[2585]: I0130 13:46:40.130020 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:40.130358 kubelet[2585]: I0130 13:46:40.130334 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:46:40.130432 kubelet[2585]: I0130 13:46:40.130360 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c51541b983cd6159f0ad928e94ccad97-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c51541b983cd6159f0ad928e94ccad97\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:40.130432 kubelet[2585]: I0130 13:46:40.130379 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c51541b983cd6159f0ad928e94ccad97-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c51541b983cd6159f0ad928e94ccad97\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:40.130432 kubelet[2585]: I0130 13:46:40.130396 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:40.130536 kubelet[2585]: I0130 13:46:40.130439 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:40.130536 kubelet[2585]: I0130 13:46:40.130453 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c51541b983cd6159f0ad928e94ccad97-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c51541b983cd6159f0ad928e94ccad97\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:40.228789 kubelet[2585]: E0130 13:46:40.228589 2585 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:40.228933 kubelet[2585]: I0130 13:46:40.228820 2585 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 30 13:46:40.228933 kubelet[2585]: I0130 13:46:40.228906 2585 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 13:46:40.368515 sudo[2625]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin 
Jan 30 13:46:40.369002 sudo[2625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 13:46:40.493009 kubelet[2585]: E0130 13:46:40.492887 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:40.493349 kubelet[2585]: E0130 13:46:40.493330 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:40.529560 kubelet[2585]: E0130 13:46:40.529528 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:40.851045 sudo[2625]: pam_unix(sudo:session): session closed for user root Jan 30 13:46:40.926104 kubelet[2585]: I0130 13:46:40.926044 2585 apiserver.go:52] "Watching apiserver" Jan 30 13:46:40.930146 kubelet[2585]: I0130 13:46:40.930102 2585 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:46:40.958288 kubelet[2585]: E0130 13:46:40.957877 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:40.960359 kubelet[2585]: E0130 13:46:40.960336 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:40.963072 kubelet[2585]: E0130 13:46:40.962576 2585 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:40.963072 kubelet[2585]: E0130 13:46:40.962993 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:40.977013 kubelet[2585]: I0130 13:46:40.976948 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.976914126 podStartE2EDuration="976.914126ms" podCreationTimestamp="2025-01-30 13:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:46:40.976767506 +0000 UTC m=+1.110350471" watchObservedRunningTime="2025-01-30 13:46:40.976914126 +0000 UTC m=+1.110497091" Jan 30 13:46:40.990434 kubelet[2585]: I0130 13:46:40.990376 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.990363513 podStartE2EDuration="2.990363513s" podCreationTimestamp="2025-01-30 13:46:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:46:40.990187808 +0000 UTC m=+1.123770773" watchObservedRunningTime="2025-01-30 13:46:40.990363513 +0000 UTC m=+1.123946478" Jan 30 13:46:40.990603 kubelet[2585]: I0130 13:46:40.990436 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.990433476 podStartE2EDuration="990.433476ms" podCreationTimestamp="2025-01-30 13:46:40 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:46:40.984474388 +0000 UTC m=+1.118057363" watchObservedRunningTime="2025-01-30 13:46:40.990433476 +0000 UTC m=+1.124016441" Jan 30 13:46:41.958764 kubelet[2585]: E0130 13:46:41.958725 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:41.959563 kubelet[2585]: E0130 13:46:41.958834 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:42.121328 kubelet[2585]: E0130 13:46:42.121279 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:42.175505 sudo[1645]: pam_unix(sudo:session): session closed for user root Jan 30 13:46:42.179901 sshd[1642]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:42.184080 systemd[1]: sshd@7-10.0.0.67:22-10.0.0.1:42766.service: Deactivated successfully. Jan 30 13:46:42.185824 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:46:42.186031 systemd[1]: session-7.scope: Consumed 4.669s CPU time, 194.6M memory peak, 0B memory swap peak. Jan 30 13:46:42.186443 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:46:42.187273 systemd-logind[1448]: Removed session 7. Jan 30 13:46:46.968679 kubelet[2585]: E0130 13:46:46.968635 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:47.416120 update_engine[1450]: I20250130 13:46:47.416052 1450 update_attempter.cc:509] Updating boot flags... 
Jan 30 13:46:47.444269 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2674) Jan 30 13:46:47.478269 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2674) Jan 30 13:46:47.504696 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2674) Jan 30 13:46:47.966407 kubelet[2585]: E0130 13:46:47.966372 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:50.067909 kubelet[2585]: E0130 13:46:50.067870 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:50.970979 kubelet[2585]: E0130 13:46:50.970944 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:51.971952 kubelet[2585]: E0130 13:46:51.971911 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:52.125857 kubelet[2585]: E0130 13:46:52.125809 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:55.015595 kubelet[2585]: I0130 13:46:55.015557 2585 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:46:55.016893 kubelet[2585]: I0130 13:46:55.016341 2585 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:46:55.016932 containerd[1458]: time="2025-01-30T13:46:55.016078342Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:46:55.047259 kubelet[2585]: I0130 13:46:55.045552 2585 topology_manager.go:215] "Topology Admit Handler" podUID="7500ef0d-01e2-4be5-ac52-42c217e9ef66" podNamespace="kube-system" podName="kube-proxy-6gw5d" Jan 30 13:46:55.055965 kubelet[2585]: I0130 13:46:55.055196 2585 topology_manager.go:215] "Topology Admit Handler" podUID="a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" podNamespace="kube-system" podName="cilium-w8npn" Jan 30 13:46:55.059261 systemd[1]: Created slice kubepods-besteffort-pod7500ef0d_01e2_4be5_ac52_42c217e9ef66.slice - libcontainer container kubepods-besteffort-pod7500ef0d_01e2_4be5_ac52_42c217e9ef66.slice. Jan 30 13:46:55.070009 systemd[1]: Created slice kubepods-burstable-poda6786c32_44c6_44ae_bb6c_ec5d36f18d8d.slice - libcontainer container kubepods-burstable-poda6786c32_44c6_44ae_bb6c_ec5d36f18d8d.slice. 
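Annotation: the slice names systemd just created show how the kubelet's systemd cgroup driver escapes pod UIDs. Dashes in a systemd slice name encode nesting (kubepods-besteffort.slice lives under kubepods.slice), so the UID's own dashes are rewritten to underscores, and each pod nests under its QoS-class slice: kube-proxy-6gw5d is besteffort, cilium-w8npn burstable. A sketch of the mapping:

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice reproduces the kubepods-<qos>-pod<uid>.slice unit names systemd
// logs above; the pod UID is escaped with underscores because dashes would
// otherwise imply deeper slice nesting.
func podSlice(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSlice("besteffort", "7500ef0d-01e2-4be5-ac52-42c217e9ef66"))
	fmt.Println(podSlice("burstable", "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d"))
	// kubepods-besteffort-pod7500ef0d_01e2_4be5_ac52_42c217e9ef66.slice
	// kubepods-burstable-poda6786c32_44c6_44ae_bb6c_ec5d36f18d8d.slice
}
```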
Jan 30 13:46:55.221529 kubelet[2585]: I0130 13:46:55.221481 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7500ef0d-01e2-4be5-ac52-42c217e9ef66-lib-modules\") pod \"kube-proxy-6gw5d\" (UID: \"7500ef0d-01e2-4be5-ac52-42c217e9ef66\") " pod="kube-system/kube-proxy-6gw5d" Jan 30 13:46:55.221529 kubelet[2585]: I0130 13:46:55.221518 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-xtables-lock\") pod \"cilium-w8npn\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") " pod="kube-system/cilium-w8npn" Jan 30 13:46:55.221679 kubelet[2585]: I0130 13:46:55.221541 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-clustermesh-secrets\") pod \"cilium-w8npn\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") " pod="kube-system/cilium-w8npn" Jan 30 13:46:55.221679 kubelet[2585]: I0130 13:46:55.221562 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbff8\" (UniqueName: \"kubernetes.io/projected/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-kube-api-access-nbff8\") pod \"cilium-w8npn\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") " pod="kube-system/cilium-w8npn" Jan 30 13:46:55.221679 kubelet[2585]: I0130 13:46:55.221652 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c89r\" (UniqueName: \"kubernetes.io/projected/7500ef0d-01e2-4be5-ac52-42c217e9ef66-kube-api-access-5c89r\") pod \"kube-proxy-6gw5d\" (UID: \"7500ef0d-01e2-4be5-ac52-42c217e9ef66\") " pod="kube-system/kube-proxy-6gw5d" Jan 30 13:46:55.221828 kubelet[2585]: I0130 13:46:55.221691 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-lib-modules\") pod \"cilium-w8npn\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") " pod="kube-system/cilium-w8npn" Jan 30 13:46:55.221828 kubelet[2585]: I0130 13:46:55.221731 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-host-proc-sys-kernel\") pod \"cilium-w8npn\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") " pod="kube-system/cilium-w8npn" Jan 30 13:46:55.221828 kubelet[2585]: I0130 13:46:55.221764 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-cilium-cgroup\") pod \"cilium-w8npn\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") " pod="kube-system/cilium-w8npn" Jan 30 13:46:55.221828 kubelet[2585]: I0130 13:46:55.221789 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7500ef0d-01e2-4be5-ac52-42c217e9ef66-xtables-lock\") pod \"kube-proxy-6gw5d\" (UID: \"7500ef0d-01e2-4be5-ac52-42c217e9ef66\") " pod="kube-system/kube-proxy-6gw5d" Jan 30 13:46:55.221828 kubelet[2585]: I0130 13:46:55.221808 2585 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-hubble-tls\") pod \"cilium-w8npn\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") " pod="kube-system/cilium-w8npn" Jan 30 13:46:55.221940 kubelet[2585]: I0130 13:46:55.221831 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-cilium-config-path\") pod \"cilium-w8npn\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") " pod="kube-system/cilium-w8npn" Jan 30 13:46:55.221940 kubelet[2585]: I0130 13:46:55.221854 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7500ef0d-01e2-4be5-ac52-42c217e9ef66-kube-proxy\") pod \"kube-proxy-6gw5d\" (UID: \"7500ef0d-01e2-4be5-ac52-42c217e9ef66\") " pod="kube-system/kube-proxy-6gw5d" Jan 30 13:46:55.221940 kubelet[2585]: I0130 13:46:55.221874 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-cilium-run\") pod \"cilium-w8npn\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") " pod="kube-system/cilium-w8npn" Jan 30 13:46:55.221940 kubelet[2585]: I0130 13:46:55.221897 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-host-proc-sys-net\") pod \"cilium-w8npn\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") " pod="kube-system/cilium-w8npn" Jan 30 13:46:55.221940 kubelet[2585]: I0130 13:46:55.221917 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-bpf-maps\") pod \"cilium-w8npn\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") " pod="kube-system/cilium-w8npn" Jan 30 13:46:55.221940 kubelet[2585]: I0130 13:46:55.221936 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-cni-path\") pod \"cilium-w8npn\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") " pod="kube-system/cilium-w8npn" Jan 30 13:46:55.222082 kubelet[2585]: I0130 13:46:55.221958 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-etc-cni-netd\") pod \"cilium-w8npn\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") " pod="kube-system/cilium-w8npn" Jan 30 13:46:55.222082 kubelet[2585]: I0130 13:46:55.221996 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-hostproc\") pod \"cilium-w8npn\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") " pod="kube-system/cilium-w8npn" Jan 30 13:46:55.368807 kubelet[2585]: E0130 13:46:55.368683 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:55.369666 containerd[1458]: 
time="2025-01-30T13:46:55.369453350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6gw5d,Uid:7500ef0d-01e2-4be5-ac52-42c217e9ef66,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:55.372135 kubelet[2585]: E0130 13:46:55.372111 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:55.372536 containerd[1458]: time="2025-01-30T13:46:55.372502135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w8npn,Uid:a6786c32-44c6-44ae-bb6c-ec5d36f18d8d,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:55.405617 containerd[1458]: time="2025-01-30T13:46:55.405499883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:55.405617 containerd[1458]: time="2025-01-30T13:46:55.405592748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:55.405617 containerd[1458]: time="2025-01-30T13:46:55.405608097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:55.405816 containerd[1458]: time="2025-01-30T13:46:55.405696864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:55.416577 containerd[1458]: time="2025-01-30T13:46:55.415900285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:55.416577 containerd[1458]: time="2025-01-30T13:46:55.415957513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:55.416577 containerd[1458]: time="2025-01-30T13:46:55.415976178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:55.416577 containerd[1458]: time="2025-01-30T13:46:55.416195583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:55.425412 systemd[1]: Started cri-containerd-14e74ffd0d5dc887745ee48cd8147f8fe679ca725d1f30dc6abda3baf7411e88.scope - libcontainer container 14e74ffd0d5dc887745ee48cd8147f8fe679ca725d1f30dc6abda3baf7411e88. Jan 30 13:46:55.431274 systemd[1]: Started cri-containerd-d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d.scope - libcontainer container d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d. 
Jan 30 13:46:55.455405 containerd[1458]: time="2025-01-30T13:46:55.455357396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w8npn,Uid:a6786c32-44c6-44ae-bb6c-ec5d36f18d8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d\"" Jan 30 13:46:55.456041 kubelet[2585]: E0130 13:46:55.456017 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:55.456212 containerd[1458]: time="2025-01-30T13:46:55.456159290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6gw5d,Uid:7500ef0d-01e2-4be5-ac52-42c217e9ef66,Namespace:kube-system,Attempt:0,} returns sandbox id \"14e74ffd0d5dc887745ee48cd8147f8fe679ca725d1f30dc6abda3baf7411e88\"" Jan 30 13:46:55.456748 kubelet[2585]: E0130 13:46:55.456733 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:55.456937 containerd[1458]: time="2025-01-30T13:46:55.456906410Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:46:55.458798 containerd[1458]: time="2025-01-30T13:46:55.458764458Z" level=info msg="CreateContainer within sandbox \"14e74ffd0d5dc887745ee48cd8147f8fe679ca725d1f30dc6abda3baf7411e88\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:46:55.477962 containerd[1458]: time="2025-01-30T13:46:55.477902413Z" level=info msg="CreateContainer within sandbox \"14e74ffd0d5dc887745ee48cd8147f8fe679ca725d1f30dc6abda3baf7411e88\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4bc448bd6176513bbc97c6af99be6c77a7091f010346328d6f8eafdb99cbb80d\"" Jan 30 13:46:55.478620 containerd[1458]: time="2025-01-30T13:46:55.478494601Z" level=info msg="StartContainer for \"4bc448bd6176513bbc97c6af99be6c77a7091f010346328d6f8eafdb99cbb80d\"" Jan 30 13:46:55.505361 systemd[1]: Started cri-containerd-4bc448bd6176513bbc97c6af99be6c77a7091f010346328d6f8eafdb99cbb80d.scope - libcontainer container 4bc448bd6176513bbc97c6af99be6c77a7091f010346328d6f8eafdb99cbb80d. Jan 30 13:46:55.531933 containerd[1458]: time="2025-01-30T13:46:55.531887107Z" level=info msg="StartContainer for \"4bc448bd6176513bbc97c6af99be6c77a7091f010346328d6f8eafdb99cbb80d\" returns successfully" Jan 30 13:46:55.843387 kubelet[2585]: I0130 13:46:55.842994 2585 topology_manager.go:215] "Topology Admit Handler" podUID="bd150c15-5296-4a77-9e00-243a2abad1ad" podNamespace="kube-system" podName="cilium-operator-599987898-8pktg" Jan 30 13:46:55.853107 systemd[1]: Created slice kubepods-besteffort-podbd150c15_5296_4a77_9e00_243a2abad1ad.slice - libcontainer container kubepods-besteffort-podbd150c15_5296_4a77_9e00_243a2abad1ad.slice. 
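The sequence above is the standard CRI flow: RunPodSandbox returns a sandbox id (14e74ffd... for kube-proxy-6gw5d, d01ed1e0... for cilium-w8npn), then CreateContainer is issued within that sandbox and StartContainer runs the result. A rough sketch of the same three calls against containerd's CRI endpoint, assuming the k8s.io/cri-api client package; the socket path and image tag are assumptions, and a real kubelet supplies far more config (log directories, mounts, linux options) than this minimal request:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed endpoint: containerd's default CRI socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox sets up the pod's sandbox environment and returns
	//    the sandbox id echoed in the log lines above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-6gw5d",
			Namespace: "kube-system",
			Uid:       "7500ef0d-01e2-4be5-ac52-42c217e9ef66",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within that sandbox ("CreateContainer within sandbox ...").
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			// Image reference is hypothetical; the log does not name it.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.30.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer ("StartContainer ... returns successfully").
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: cc.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}
```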
Jan 30 13:46:55.981183 kubelet[2585]: E0130 13:46:55.979411 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:56.026605 kubelet[2585]: I0130 13:46:56.026555 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd150c15-5296-4a77-9e00-243a2abad1ad-cilium-config-path\") pod \"cilium-operator-599987898-8pktg\" (UID: \"bd150c15-5296-4a77-9e00-243a2abad1ad\") " pod="kube-system/cilium-operator-599987898-8pktg" Jan 30 13:46:56.026605 kubelet[2585]: I0130 13:46:56.026600 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp6df\" (UniqueName: \"kubernetes.io/projected/bd150c15-5296-4a77-9e00-243a2abad1ad-kube-api-access-tp6df\") pod \"cilium-operator-599987898-8pktg\" (UID: \"bd150c15-5296-4a77-9e00-243a2abad1ad\") " pod="kube-system/cilium-operator-599987898-8pktg" Jan 30 13:46:56.161573 kubelet[2585]: E0130 13:46:56.161459 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:56.162251 containerd[1458]: time="2025-01-30T13:46:56.161821135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-8pktg,Uid:bd150c15-5296-4a77-9e00-243a2abad1ad,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:56.184694 containerd[1458]: time="2025-01-30T13:46:56.184609123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:56.184694 containerd[1458]: time="2025-01-30T13:46:56.184656703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:56.184694 containerd[1458]: time="2025-01-30T13:46:56.184668976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:56.184878 containerd[1458]: time="2025-01-30T13:46:56.184736945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:56.212368 systemd[1]: Started cri-containerd-39b7ec74db4fae370045af9f99a6afc065b4da95c40c34069daa4253b8c5cd72.scope - libcontainer container 39b7ec74db4fae370045af9f99a6afc065b4da95c40c34069daa4253b8c5cd72. 
Jan 30 13:46:56.246219 containerd[1458]: time="2025-01-30T13:46:56.246152388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-8pktg,Uid:bd150c15-5296-4a77-9e00-243a2abad1ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"39b7ec74db4fae370045af9f99a6afc065b4da95c40c34069daa4253b8c5cd72\"" Jan 30 13:46:56.246880 kubelet[2585]: E0130 13:46:56.246852 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:00.078200 kubelet[2585]: I0130 13:47:00.078144 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6gw5d" podStartSLOduration=5.078120408 podStartE2EDuration="5.078120408s" podCreationTimestamp="2025-01-30 13:46:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:46:55.988615769 +0000 UTC m=+16.122198734" watchObservedRunningTime="2025-01-30 13:47:00.078120408 +0000 UTC m=+20.211703373" Jan 30 13:47:03.057452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2509063912.mount: Deactivated successfully. Jan 30 13:47:05.909401 containerd[1458]: time="2025-01-30T13:47:05.909341320Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:05.910145 containerd[1458]: time="2025-01-30T13:47:05.910100308Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 13:47:05.911280 containerd[1458]: time="2025-01-30T13:47:05.911257415Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:05.912771 containerd[1458]: time="2025-01-30T13:47:05.912729726Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.455789462s" Jan 30 13:47:05.912812 containerd[1458]: time="2025-01-30T13:47:05.912769310Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 13:47:05.915587 containerd[1458]: time="2025-01-30T13:47:05.914681409Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:47:05.922168 containerd[1458]: time="2025-01-30T13:47:05.922131348Z" level=info msg="CreateContainer within sandbox \"d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:47:05.934059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1715982997.mount: Deactivated successfully. 
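The pull that just completed used a tag-plus-digest reference (cilium:v1.12.5@sha256:...), and containerd recorded an empty repo tag with only the repo digest, consistent with the image being resolved by digest. A small sketch with the distribution reference parser (an assumed dependency) shows how such a reference decomposes into both a tag and a digest:

```go
package main

import (
	"fmt"
	"log"

	"github.com/distribution/reference"
)

func main() {
	// The exact reference pulled in the log above: name:tag@digest.
	ref, err := reference.ParseNormalizedNamed(
		"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
	if err != nil {
		log.Fatal(err)
	}
	// A reference carrying both parts satisfies both interfaces; when a
	// digest is present, runtimes resolve content by digest and the tag
	// is informational, matching the empty repo tag recorded above.
	if t, ok := ref.(reference.Tagged); ok {
		fmt.Println("tag:", t.Tag())
	}
	if d, ok := ref.(reference.Digested); ok {
		fmt.Println("digest:", d.Digest())
	}
}
```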
Jan 30 13:47:05.935921 containerd[1458]: time="2025-01-30T13:47:05.935884398Z" level=info msg="CreateContainer within sandbox \"d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f\"" Jan 30 13:47:05.936367 containerd[1458]: time="2025-01-30T13:47:05.936344444Z" level=info msg="StartContainer for \"22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f\"" Jan 30 13:47:05.968464 systemd[1]: Started cri-containerd-22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f.scope - libcontainer container 22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f. Jan 30 13:47:05.994281 containerd[1458]: time="2025-01-30T13:47:05.994212751Z" level=info msg="StartContainer for \"22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f\" returns successfully" Jan 30 13:47:05.997123 kubelet[2585]: E0130 13:47:05.997090 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:06.007279 systemd[1]: cri-containerd-22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f.scope: Deactivated successfully. Jan 30 13:47:06.503208 containerd[1458]: time="2025-01-30T13:47:06.503123849Z" level=info msg="shim disconnected" id=22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f namespace=k8s.io Jan 30 13:47:06.503208 containerd[1458]: time="2025-01-30T13:47:06.503180986Z" level=warning msg="cleaning up after shim disconnected" id=22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f namespace=k8s.io Jan 30 13:47:06.503208 containerd[1458]: time="2025-01-30T13:47:06.503189783Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:06.932510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f-rootfs.mount: Deactivated successfully. Jan 30 13:47:07.000094 kubelet[2585]: E0130 13:47:07.000051 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:07.001881 containerd[1458]: time="2025-01-30T13:47:07.001839792Z" level=info msg="CreateContainer within sandbox \"d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:47:07.192217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2846859865.mount: Deactivated successfully. Jan 30 13:47:07.193605 containerd[1458]: time="2025-01-30T13:47:07.193543921Z" level=info msg="CreateContainer within sandbox \"d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3\"" Jan 30 13:47:07.194507 containerd[1458]: time="2025-01-30T13:47:07.194370837Z" level=info msg="StartContainer for \"0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3\"" Jan 30 13:47:07.225353 systemd[1]: Started cri-containerd-0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3.scope - libcontainer container 0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3. 
Jan 30 13:47:07.253937 containerd[1458]: time="2025-01-30T13:47:07.253859126Z" level=info msg="StartContainer for \"0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3\" returns successfully" Jan 30 13:47:07.267692 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:47:07.268029 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:47:07.268125 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:47:07.273777 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:47:07.274211 systemd[1]: cri-containerd-0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3.scope: Deactivated successfully. Jan 30 13:47:07.298143 containerd[1458]: time="2025-01-30T13:47:07.298071860Z" level=info msg="shim disconnected" id=0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3 namespace=k8s.io Jan 30 13:47:07.298143 containerd[1458]: time="2025-01-30T13:47:07.298138436Z" level=warning msg="cleaning up after shim disconnected" id=0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3 namespace=k8s.io Jan 30 13:47:07.298386 containerd[1458]: time="2025-01-30T13:47:07.298151320Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:07.299633 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:47:07.785796 systemd[1]: Started sshd@8-10.0.0.67:22-10.0.0.1:38972.service - OpenSSH per-connection server daemon (10.0.0.1:38972). Jan 30 13:47:07.819164 sshd[3130]: Accepted publickey for core from 10.0.0.1 port 38972 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:07.820634 sshd[3130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:07.825186 systemd-logind[1448]: New session 8 of user core. Jan 30 13:47:07.831364 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:47:07.932575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3-rootfs.mount: Deactivated successfully. Jan 30 13:47:07.974926 sshd[3130]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:07.978683 systemd[1]: sshd@8-10.0.0.67:22-10.0.0.1:38972.service: Deactivated successfully. Jan 30 13:47:07.980947 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:47:07.983183 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:47:07.984258 systemd-logind[1448]: Removed session 8. Jan 30 13:47:08.003949 kubelet[2585]: E0130 13:47:08.003923 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:08.008666 containerd[1458]: time="2025-01-30T13:47:08.008624583Z" level=info msg="CreateContainer within sandbox \"d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:47:08.027343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2592863209.mount: Deactivated successfully. 
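The short-lived containers in this stretch are Cilium's init containers (mount-cgroup, apply-sysctl-overwrites, and next mount-bpf-fs): each starts, exits successfully, and has its scope and shim torn down, hence the paired "Deactivated successfully" and "shim disconnected" lines. mount-bpf-fs in particular just ensures the BPF filesystem is mounted on the host; a rough Go equivalent under stated assumptions (the mount point is Cilium's documented default, and the call needs CAP_SYS_ADMIN):

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent of `mount bpffs /sys/fs/bpf -t bpf`. EBUSY typically
	// means the filesystem is already mounted, which is tolerated here
	// the same way an idempotent init container would tolerate it.
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil && err != unix.EBUSY {
		log.Fatal(err)
	}
}
```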
Jan 30 13:47:08.031247 containerd[1458]: time="2025-01-30T13:47:08.031192736Z" level=info msg="CreateContainer within sandbox \"d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67\"" Jan 30 13:47:08.031819 containerd[1458]: time="2025-01-30T13:47:08.031799998Z" level=info msg="StartContainer for \"db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67\"" Jan 30 13:47:08.064393 systemd[1]: Started cri-containerd-db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67.scope - libcontainer container db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67. Jan 30 13:47:08.095985 systemd[1]: cri-containerd-db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67.scope: Deactivated successfully. Jan 30 13:47:08.178898 containerd[1458]: time="2025-01-30T13:47:08.178826436Z" level=info msg="StartContainer for \"db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67\" returns successfully" Jan 30 13:47:08.312811 containerd[1458]: time="2025-01-30T13:47:08.312745693Z" level=info msg="shim disconnected" id=db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67 namespace=k8s.io Jan 30 13:47:08.312811 containerd[1458]: time="2025-01-30T13:47:08.312802421Z" level=warning msg="cleaning up after shim disconnected" id=db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67 namespace=k8s.io Jan 30 13:47:08.312811 containerd[1458]: time="2025-01-30T13:47:08.312812249Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:08.317524 containerd[1458]: time="2025-01-30T13:47:08.317436816Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:08.318438 containerd[1458]: time="2025-01-30T13:47:08.318400700Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 13:47:08.320365 containerd[1458]: time="2025-01-30T13:47:08.319624791Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:08.321033 containerd[1458]: time="2025-01-30T13:47:08.321006049Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.406295284s" Jan 30 13:47:08.321095 containerd[1458]: time="2025-01-30T13:47:08.321038811Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 13:47:08.324779 containerd[1458]: time="2025-01-30T13:47:08.324737436Z" level=info msg="CreateContainer within sandbox \"39b7ec74db4fae370045af9f99a6afc065b4da95c40c34069daa4253b8c5cd72\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:47:08.340565 
containerd[1458]: time="2025-01-30T13:47:08.340509649Z" level=info msg="CreateContainer within sandbox \"39b7ec74db4fae370045af9f99a6afc065b4da95c40c34069daa4253b8c5cd72\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241\"" Jan 30 13:47:08.341171 containerd[1458]: time="2025-01-30T13:47:08.341131308Z" level=info msg="StartContainer for \"c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241\"" Jan 30 13:47:08.365348 systemd[1]: Started cri-containerd-c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241.scope - libcontainer container c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241. Jan 30 13:47:08.387653 containerd[1458]: time="2025-01-30T13:47:08.387612424Z" level=info msg="StartContainer for \"c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241\" returns successfully" Jan 30 13:47:08.933367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67-rootfs.mount: Deactivated successfully. Jan 30 13:47:09.012258 kubelet[2585]: E0130 13:47:09.010382 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:09.020197 containerd[1458]: time="2025-01-30T13:47:09.020145921Z" level=info msg="CreateContainer within sandbox \"d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:47:09.024250 kubelet[2585]: E0130 13:47:09.023934 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:09.042848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1688044337.mount: Deactivated successfully. Jan 30 13:47:09.046968 containerd[1458]: time="2025-01-30T13:47:09.046903736Z" level=info msg="CreateContainer within sandbox \"d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4\"" Jan 30 13:47:09.047480 containerd[1458]: time="2025-01-30T13:47:09.047443671Z" level=info msg="StartContainer for \"4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4\"" Jan 30 13:47:09.097377 systemd[1]: Started cri-containerd-4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4.scope - libcontainer container 4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4. Jan 30 13:47:09.124131 systemd[1]: cri-containerd-4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4.scope: Deactivated successfully. 
Jan 30 13:47:09.128715 containerd[1458]: time="2025-01-30T13:47:09.128673039Z" level=info msg="StartContainer for \"4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4\" returns successfully" Jan 30 13:47:09.150523 containerd[1458]: time="2025-01-30T13:47:09.150442473Z" level=info msg="shim disconnected" id=4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4 namespace=k8s.io Jan 30 13:47:09.150523 containerd[1458]: time="2025-01-30T13:47:09.150502336Z" level=warning msg="cleaning up after shim disconnected" id=4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4 namespace=k8s.io Jan 30 13:47:09.150523 containerd[1458]: time="2025-01-30T13:47:09.150513908Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:09.932271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4-rootfs.mount: Deactivated successfully. Jan 30 13:47:10.029490 kubelet[2585]: E0130 13:47:10.029458 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:10.029490 kubelet[2585]: E0130 13:47:10.029485 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:10.032053 containerd[1458]: time="2025-01-30T13:47:10.031954502Z" level=info msg="CreateContainer within sandbox \"d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:47:10.046396 kubelet[2585]: I0130 13:47:10.046327 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-8pktg" podStartSLOduration=2.971610779 podStartE2EDuration="15.046308689s" podCreationTimestamp="2025-01-30 13:46:55 +0000 UTC" firstStartedPulling="2025-01-30 13:46:56.247474753 +0000 UTC m=+16.381057718" lastFinishedPulling="2025-01-30 13:47:08.322172663 +0000 UTC m=+28.455755628" observedRunningTime="2025-01-30 13:47:09.05579308 +0000 UTC m=+29.189376045" watchObservedRunningTime="2025-01-30 13:47:10.046308689 +0000 UTC m=+30.179891654" Jan 30 13:47:10.053186 containerd[1458]: time="2025-01-30T13:47:10.052546707Z" level=info msg="CreateContainer within sandbox \"d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348\"" Jan 30 13:47:10.053216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4185875921.mount: Deactivated successfully. Jan 30 13:47:10.053631 containerd[1458]: time="2025-01-30T13:47:10.053549922Z" level=info msg="StartContainer for \"2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348\"" Jan 30 13:47:10.083366 systemd[1]: Started cri-containerd-2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348.scope - libcontainer container 2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348. 
Jan 30 13:47:10.118043 containerd[1458]: time="2025-01-30T13:47:10.117988799Z" level=info msg="StartContainer for \"2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348\" returns successfully" Jan 30 13:47:10.244714 kubelet[2585]: I0130 13:47:10.244572 2585 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:47:10.264395 kubelet[2585]: I0130 13:47:10.264352 2585 topology_manager.go:215] "Topology Admit Handler" podUID="6304b62f-24f7-4ef7-83b5-5c55c295f9d6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4kfff" Jan 30 13:47:10.265432 kubelet[2585]: I0130 13:47:10.265397 2585 topology_manager.go:215] "Topology Admit Handler" podUID="2b0aa656-55a2-4094-b101-036e76301fb7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rzvt9" Jan 30 13:47:10.273698 systemd[1]: Created slice kubepods-burstable-pod6304b62f_24f7_4ef7_83b5_5c55c295f9d6.slice - libcontainer container kubepods-burstable-pod6304b62f_24f7_4ef7_83b5_5c55c295f9d6.slice. Jan 30 13:47:10.281979 systemd[1]: Created slice kubepods-burstable-pod2b0aa656_55a2_4094_b101_036e76301fb7.slice - libcontainer container kubepods-burstable-pod2b0aa656_55a2_4094_b101_036e76301fb7.slice. Jan 30 13:47:10.360299 kubelet[2585]: I0130 13:47:10.360202 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b0aa656-55a2-4094-b101-036e76301fb7-config-volume\") pod \"coredns-7db6d8ff4d-rzvt9\" (UID: \"2b0aa656-55a2-4094-b101-036e76301fb7\") " pod="kube-system/coredns-7db6d8ff4d-rzvt9" Jan 30 13:47:10.360299 kubelet[2585]: I0130 13:47:10.360268 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx8xg\" (UniqueName: \"kubernetes.io/projected/2b0aa656-55a2-4094-b101-036e76301fb7-kube-api-access-nx8xg\") pod \"coredns-7db6d8ff4d-rzvt9\" (UID: \"2b0aa656-55a2-4094-b101-036e76301fb7\") " pod="kube-system/coredns-7db6d8ff4d-rzvt9" Jan 30 13:47:10.360299 kubelet[2585]: I0130 13:47:10.360296 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x998x\" (UniqueName: \"kubernetes.io/projected/6304b62f-24f7-4ef7-83b5-5c55c295f9d6-kube-api-access-x998x\") pod \"coredns-7db6d8ff4d-4kfff\" (UID: \"6304b62f-24f7-4ef7-83b5-5c55c295f9d6\") " pod="kube-system/coredns-7db6d8ff4d-4kfff" Jan 30 13:47:10.360543 kubelet[2585]: I0130 13:47:10.360320 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6304b62f-24f7-4ef7-83b5-5c55c295f9d6-config-volume\") pod \"coredns-7db6d8ff4d-4kfff\" (UID: \"6304b62f-24f7-4ef7-83b5-5c55c295f9d6\") " pod="kube-system/coredns-7db6d8ff4d-4kfff" Jan 30 13:47:10.578990 kubelet[2585]: E0130 13:47:10.578941 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:10.583037 containerd[1458]: time="2025-01-30T13:47:10.582019619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4kfff,Uid:6304b62f-24f7-4ef7-83b5-5c55c295f9d6,Namespace:kube-system,Attempt:0,}" Jan 30 13:47:10.592172 kubelet[2585]: E0130 13:47:10.592125 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 
13:47:10.592722 containerd[1458]: time="2025-01-30T13:47:10.592679720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rzvt9,Uid:2b0aa656-55a2-4094-b101-036e76301fb7,Namespace:kube-system,Attempt:0,}" Jan 30 13:47:11.033906 kubelet[2585]: E0130 13:47:11.033870 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:11.045799 kubelet[2585]: I0130 13:47:11.045734 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w8npn" podStartSLOduration=5.58874102 podStartE2EDuration="16.045714774s" podCreationTimestamp="2025-01-30 13:46:55 +0000 UTC" firstStartedPulling="2025-01-30 13:46:55.456620069 +0000 UTC m=+15.590203034" lastFinishedPulling="2025-01-30 13:47:05.913593812 +0000 UTC m=+26.047176788" observedRunningTime="2025-01-30 13:47:11.044929627 +0000 UTC m=+31.178512602" watchObservedRunningTime="2025-01-30 13:47:11.045714774 +0000 UTC m=+31.179297729" Jan 30 13:47:12.035288 kubelet[2585]: E0130 13:47:12.035249 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:12.358708 systemd-networkd[1398]: cilium_host: Link UP Jan 30 13:47:12.358936 systemd-networkd[1398]: cilium_net: Link UP Jan 30 13:47:12.358940 systemd-networkd[1398]: cilium_net: Gained carrier Jan 30 13:47:12.360167 systemd-networkd[1398]: cilium_host: Gained carrier Jan 30 13:47:12.456730 systemd-networkd[1398]: cilium_vxlan: Link UP Jan 30 13:47:12.456740 systemd-networkd[1398]: cilium_vxlan: Gained carrier Jan 30 13:47:12.530386 systemd-networkd[1398]: cilium_host: Gained IPv6LL Jan 30 13:47:12.652266 kernel: NET: Registered PF_ALG protocol family Jan 30 13:47:12.770448 systemd-networkd[1398]: cilium_net: Gained IPv6LL Jan 30 13:47:12.992740 systemd[1]: Started sshd@9-10.0.0.67:22-10.0.0.1:38980.service - OpenSSH per-connection server daemon (10.0.0.1:38980). Jan 30 13:47:13.029589 sshd[3663]: Accepted publickey for core from 10.0.0.1 port 38980 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:13.031706 sshd[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:13.037472 kubelet[2585]: E0130 13:47:13.037440 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:13.037737 systemd-logind[1448]: New session 9 of user core. Jan 30 13:47:13.041402 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:47:13.164792 sshd[3663]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:13.169021 systemd[1]: sshd@9-10.0.0.67:22-10.0.0.1:38980.service: Deactivated successfully. Jan 30 13:47:13.171313 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:47:13.171990 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:47:13.172914 systemd-logind[1448]: Removed session 9. 
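The startup-latency line for cilium-w8npn decomposes cleanly: podStartSLOduration is the end-to-end startup duration minus the time spent pulling images, and the figures in this log bear that out (16.045714774s minus the pull window from 13:46:55.456620069 to 13:47:05.913593812 leaves about 5.588741s, matching the reported 5.58874102 up to rounding). A quick check in Go:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps copied from the pod_startup_latency_tracker line above.
	firstPull := parse("2025-01-30 13:46:55.456620069 +0000 UTC")
	lastPull := parse("2025-01-30 13:47:05.913593812 +0000 UTC")
	e2e := 16045714774 * time.Nanosecond // podStartE2EDuration from the log

	// SLO duration = end-to-end duration minus image-pull duration.
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(slo) // ~5.588741031s, matching podStartSLOduration above
}
```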
Jan 30 13:47:13.313053 systemd-networkd[1398]: lxc_health: Link UP Jan 30 13:47:13.330775 systemd-networkd[1398]: lxc_health: Gained carrier Jan 30 13:47:13.618389 systemd-networkd[1398]: cilium_vxlan: Gained IPv6LL Jan 30 13:47:13.713506 systemd-networkd[1398]: lxc52fa823d812e: Link UP Jan 30 13:47:13.720283 kernel: eth0: renamed from tmp73199 Jan 30 13:47:13.728889 systemd-networkd[1398]: lxc52fa823d812e: Gained carrier Jan 30 13:47:13.746736 systemd-networkd[1398]: lxc71cae75ad871: Link UP Jan 30 13:47:13.748254 kernel: eth0: renamed from tmp745ad Jan 30 13:47:13.755014 systemd-networkd[1398]: lxc71cae75ad871: Gained carrier Jan 30 13:47:14.039412 kubelet[2585]: E0130 13:47:14.039381 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:15.154379 systemd-networkd[1398]: lxc71cae75ad871: Gained IPv6LL Jan 30 13:47:15.218387 systemd-networkd[1398]: lxc_health: Gained IPv6LL Jan 30 13:47:15.346361 systemd-networkd[1398]: lxc52fa823d812e: Gained IPv6LL Jan 30 13:47:16.486258 kubelet[2585]: I0130 13:47:16.486199 2585 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:47:16.487074 kubelet[2585]: E0130 13:47:16.486998 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:17.044623 kubelet[2585]: E0130 13:47:17.044563 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:17.499594 containerd[1458]: time="2025-01-30T13:47:17.499225703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:47:17.499594 containerd[1458]: time="2025-01-30T13:47:17.499343403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:47:17.499594 containerd[1458]: time="2025-01-30T13:47:17.499356628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:17.499594 containerd[1458]: time="2025-01-30T13:47:17.499438423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:17.526365 systemd[1]: Started cri-containerd-745adba8b5b5525bccde1cb65f6362e5a058ae3842d493ef020df96d6e0590ef.scope - libcontainer container 745adba8b5b5525bccde1cb65f6362e5a058ae3842d493ef020df96d6e0590ef. 
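The lxc* links and the kernel "eth0: renamed from tmp73199" lines are the usual CNI veth choreography: the plugin creates a veth pair with a temporary peer name, moves the peer into the pod's network namespace, and renames it to eth0, leaving the lxc-prefixed host end for systemd-networkd to report. A rough netlink sketch of the host-side step (interface names taken from the log but otherwise hypothetical; the vishvananda/netlink package is an assumed dependency and the program needs root):

```go
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Host side keeps the lxc* name; the peer starts life as tmp* and is
	// later renamed to eth0 inside the pod's network namespace.
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "lxc52fa823d812e"},
		PeerName:  "tmp73199",
	}
	if err := netlink.LinkAdd(veth); err != nil {
		log.Fatal(err)
	}
	if err := netlink.LinkSetUp(veth); err != nil {
		log.Fatal(err)
	}
	// Moving the peer into the pod netns and renaming it (LinkSetNsFd,
	// then LinkSetName to "eth0") is omitted here; that rename is what
	// produces the kernel "renamed from tmp..." message seen above.
}
```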
Jan 30 13:47:17.537127 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:47:17.563597 containerd[1458]: time="2025-01-30T13:47:17.562345322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rzvt9,Uid:2b0aa656-55a2-4094-b101-036e76301fb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"745adba8b5b5525bccde1cb65f6362e5a058ae3842d493ef020df96d6e0590ef\"" Jan 30 13:47:17.563720 kubelet[2585]: E0130 13:47:17.563140 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:17.565476 containerd[1458]: time="2025-01-30T13:47:17.564030838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:47:17.565476 containerd[1458]: time="2025-01-30T13:47:17.564803629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:47:17.565476 containerd[1458]: time="2025-01-30T13:47:17.564820922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:17.565476 containerd[1458]: time="2025-01-30T13:47:17.564991683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:17.565589 containerd[1458]: time="2025-01-30T13:47:17.565538389Z" level=info msg="CreateContainer within sandbox \"745adba8b5b5525bccde1cb65f6362e5a058ae3842d493ef020df96d6e0590ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:47:17.595373 systemd[1]: Started cri-containerd-7319938a5e9953d3068439a763b81914c1fed1df2fddf886ee6acd5edb4b90ef.scope - libcontainer container 7319938a5e9953d3068439a763b81914c1fed1df2fddf886ee6acd5edb4b90ef. 
Jan 30 13:47:17.606723 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:47:17.631000 containerd[1458]: time="2025-01-30T13:47:17.630935877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4kfff,Uid:6304b62f-24f7-4ef7-83b5-5c55c295f9d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"7319938a5e9953d3068439a763b81914c1fed1df2fddf886ee6acd5edb4b90ef\"" Jan 30 13:47:17.631732 kubelet[2585]: E0130 13:47:17.631709 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:17.633259 containerd[1458]: time="2025-01-30T13:47:17.633205541Z" level=info msg="CreateContainer within sandbox \"7319938a5e9953d3068439a763b81914c1fed1df2fddf886ee6acd5edb4b90ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:47:17.633613 containerd[1458]: time="2025-01-30T13:47:17.633582198Z" level=info msg="CreateContainer within sandbox \"745adba8b5b5525bccde1cb65f6362e5a058ae3842d493ef020df96d6e0590ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e11ec4ab110023e92fae4b190e6321df501a4e29d9eb829bcc8ae69537822336\"" Jan 30 13:47:17.634608 containerd[1458]: time="2025-01-30T13:47:17.634567950Z" level=info msg="StartContainer for \"e11ec4ab110023e92fae4b190e6321df501a4e29d9eb829bcc8ae69537822336\"" Jan 30 13:47:17.652836 containerd[1458]: time="2025-01-30T13:47:17.652644424Z" level=info msg="CreateContainer within sandbox \"7319938a5e9953d3068439a763b81914c1fed1df2fddf886ee6acd5edb4b90ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"31d548fff83c5030e09f1a2bfa3a75a30b98f05b099fe475541f1a380fb652fe\"" Jan 30 13:47:17.654339 containerd[1458]: time="2025-01-30T13:47:17.654120927Z" level=info msg="StartContainer for \"31d548fff83c5030e09f1a2bfa3a75a30b98f05b099fe475541f1a380fb652fe\"" Jan 30 13:47:17.662395 systemd[1]: Started cri-containerd-e11ec4ab110023e92fae4b190e6321df501a4e29d9eb829bcc8ae69537822336.scope - libcontainer container e11ec4ab110023e92fae4b190e6321df501a4e29d9eb829bcc8ae69537822336. Jan 30 13:47:17.683406 systemd[1]: Started cri-containerd-31d548fff83c5030e09f1a2bfa3a75a30b98f05b099fe475541f1a380fb652fe.scope - libcontainer container 31d548fff83c5030e09f1a2bfa3a75a30b98f05b099fe475541f1a380fb652fe. 
Jan 30 13:47:17.696600 containerd[1458]: time="2025-01-30T13:47:17.696526959Z" level=info msg="StartContainer for \"e11ec4ab110023e92fae4b190e6321df501a4e29d9eb829bcc8ae69537822336\" returns successfully" Jan 30 13:47:17.713382 containerd[1458]: time="2025-01-30T13:47:17.713252455Z" level=info msg="StartContainer for \"31d548fff83c5030e09f1a2bfa3a75a30b98f05b099fe475541f1a380fb652fe\" returns successfully" Jan 30 13:47:18.047591 kubelet[2585]: E0130 13:47:18.047501 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:18.049523 kubelet[2585]: E0130 13:47:18.049497 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:18.057914 kubelet[2585]: I0130 13:47:18.057823 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4kfff" podStartSLOduration=23.057803635 podStartE2EDuration="23.057803635s" podCreationTimestamp="2025-01-30 13:46:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:47:18.056434884 +0000 UTC m=+38.190017849" watchObservedRunningTime="2025-01-30 13:47:18.057803635 +0000 UTC m=+38.191386600" Jan 30 13:47:18.078836 kubelet[2585]: I0130 13:47:18.078612 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rzvt9" podStartSLOduration=23.078591707 podStartE2EDuration="23.078591707s" podCreationTimestamp="2025-01-30 13:46:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:47:18.078424222 +0000 UTC m=+38.212007187" watchObservedRunningTime="2025-01-30 13:47:18.078591707 +0000 UTC m=+38.212174673" Jan 30 13:47:18.177525 systemd[1]: Started sshd@10-10.0.0.67:22-10.0.0.1:60038.service - OpenSSH per-connection server daemon (10.0.0.1:60038). Jan 30 13:47:18.211849 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 60038 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:18.213728 sshd[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:18.217568 systemd-logind[1448]: New session 10 of user core. Jan 30 13:47:18.227369 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:47:18.350615 sshd[4015]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:18.355410 systemd[1]: sshd@10-10.0.0.67:22-10.0.0.1:60038.service: Deactivated successfully. Jan 30 13:47:18.357155 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:47:18.357966 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:47:18.358955 systemd-logind[1448]: Removed session 10. 
Jan 30 13:47:19.051144 kubelet[2585]: E0130 13:47:19.051101 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:19.051144 kubelet[2585]: E0130 13:47:19.051108 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:20.052103 kubelet[2585]: E0130 13:47:20.052059 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:20.052103 kubelet[2585]: E0130 13:47:20.052109 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:23.367255 systemd[1]: Started sshd@11-10.0.0.67:22-10.0.0.1:60044.service - OpenSSH per-connection server daemon (10.0.0.1:60044). Jan 30 13:47:23.494956 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 60044 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:23.496297 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:23.500711 systemd-logind[1448]: New session 11 of user core. Jan 30 13:47:23.511443 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:47:23.630122 sshd[4032]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:23.634333 systemd[1]: sshd@11-10.0.0.67:22-10.0.0.1:60044.service: Deactivated successfully. Jan 30 13:47:23.636297 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:47:23.637033 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:47:23.637964 systemd-logind[1448]: Removed session 11. Jan 30 13:47:28.642754 systemd[1]: Started sshd@12-10.0.0.67:22-10.0.0.1:40924.service - OpenSSH per-connection server daemon (10.0.0.1:40924). Jan 30 13:47:28.796801 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 40924 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:28.798306 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:28.801864 systemd-logind[1448]: New session 12 of user core. Jan 30 13:47:28.809503 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:47:28.955352 sshd[4050]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:28.965602 systemd[1]: sshd@12-10.0.0.67:22-10.0.0.1:40924.service: Deactivated successfully. Jan 30 13:47:28.967786 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:47:28.970082 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:47:28.980482 systemd[1]: Started sshd@13-10.0.0.67:22-10.0.0.1:40936.service - OpenSSH per-connection server daemon (10.0.0.1:40936). Jan 30 13:47:28.981504 systemd-logind[1448]: Removed session 12. Jan 30 13:47:29.006129 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 40936 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:29.007597 sshd[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:29.011473 systemd-logind[1448]: New session 13 of user core. Jan 30 13:47:29.027377 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 30 13:47:29.220925 sshd[4065]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:29.232254 systemd[1]: sshd@13-10.0.0.67:22-10.0.0.1:40936.service: Deactivated successfully. Jan 30 13:47:29.234042 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:47:29.235767 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:47:29.246505 systemd[1]: Started sshd@14-10.0.0.67:22-10.0.0.1:40950.service - OpenSSH per-connection server daemon (10.0.0.1:40950). Jan 30 13:47:29.247394 systemd-logind[1448]: Removed session 13. Jan 30 13:47:29.280362 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 40950 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:29.281982 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:29.286184 systemd-logind[1448]: New session 14 of user core. Jan 30 13:47:29.295364 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:47:29.472186 sshd[4077]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:29.476886 systemd[1]: sshd@14-10.0.0.67:22-10.0.0.1:40950.service: Deactivated successfully. Jan 30 13:47:29.479700 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:47:29.480823 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:47:29.481837 systemd-logind[1448]: Removed session 14. Jan 30 13:47:34.483138 systemd[1]: Started sshd@15-10.0.0.67:22-10.0.0.1:40954.service - OpenSSH per-connection server daemon (10.0.0.1:40954). Jan 30 13:47:34.512962 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 40954 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:34.514385 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:34.518217 systemd-logind[1448]: New session 15 of user core. Jan 30 13:47:34.528394 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:47:34.630661 sshd[4091]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:34.634350 systemd[1]: sshd@15-10.0.0.67:22-10.0.0.1:40954.service: Deactivated successfully. Jan 30 13:47:34.636118 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:47:34.636694 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:47:34.637542 systemd-logind[1448]: Removed session 15. Jan 30 13:47:39.642278 systemd[1]: Started sshd@16-10.0.0.67:22-10.0.0.1:56134.service - OpenSSH per-connection server daemon (10.0.0.1:56134). Jan 30 13:47:39.676425 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 56134 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:39.678499 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:39.682700 systemd-logind[1448]: New session 16 of user core. Jan 30 13:47:39.692433 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:47:39.795110 sshd[4105]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:39.806142 systemd[1]: sshd@16-10.0.0.67:22-10.0.0.1:56134.service: Deactivated successfully. Jan 30 13:47:39.807987 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:47:39.809408 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:47:39.817479 systemd[1]: Started sshd@17-10.0.0.67:22-10.0.0.1:56150.service - OpenSSH per-connection server daemon (10.0.0.1:56150). 
Jan 30 13:47:39.818363 systemd-logind[1448]: Removed session 16. Jan 30 13:47:39.843573 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 56150 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:39.845006 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:39.848822 systemd-logind[1448]: New session 17 of user core. Jan 30 13:47:39.859366 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:47:40.040158 sshd[4120]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:40.050616 systemd[1]: sshd@17-10.0.0.67:22-10.0.0.1:56150.service: Deactivated successfully. Jan 30 13:47:40.053056 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:47:40.055206 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:47:40.061585 systemd[1]: Started sshd@18-10.0.0.67:22-10.0.0.1:56160.service - OpenSSH per-connection server daemon (10.0.0.1:56160). Jan 30 13:47:40.062641 systemd-logind[1448]: Removed session 17. Jan 30 13:47:40.094570 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 56160 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:40.096003 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:40.099866 systemd-logind[1448]: New session 18 of user core. Jan 30 13:47:40.115373 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:47:41.441654 sshd[4135]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:41.455357 systemd[1]: sshd@18-10.0.0.67:22-10.0.0.1:56160.service: Deactivated successfully. Jan 30 13:47:41.458247 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:47:41.460509 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:47:41.465563 systemd[1]: Started sshd@19-10.0.0.67:22-10.0.0.1:56168.service - OpenSSH per-connection server daemon (10.0.0.1:56168). Jan 30 13:47:41.466971 systemd-logind[1448]: Removed session 18. Jan 30 13:47:41.495839 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 56168 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:41.497625 sshd[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:41.502151 systemd-logind[1448]: New session 19 of user core. Jan 30 13:47:41.516508 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:47:42.127793 sshd[4156]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:42.135213 systemd[1]: sshd@19-10.0.0.67:22-10.0.0.1:56168.service: Deactivated successfully. Jan 30 13:47:42.136857 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:47:42.138454 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:47:42.144477 systemd[1]: Started sshd@20-10.0.0.67:22-10.0.0.1:56178.service - OpenSSH per-connection server daemon (10.0.0.1:56178). Jan 30 13:47:42.145370 systemd-logind[1448]: Removed session 19. Jan 30 13:47:42.170217 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 56178 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:42.171822 sshd[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:42.175702 systemd-logind[1448]: New session 20 of user core. Jan 30 13:47:42.187402 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 30 13:47:42.293373 sshd[4169]: pam_unix(sshd:session): session closed for user core
Jan 30 13:47:42.298492 systemd[1]: sshd@20-10.0.0.67:22-10.0.0.1:56178.service: Deactivated successfully.
Jan 30 13:47:42.300596 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 13:47:42.301407 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit.
Jan 30 13:47:42.302180 systemd-logind[1448]: Removed session 20.
Jan 30 13:47:47.305195 systemd[1]: Started sshd@21-10.0.0.67:22-10.0.0.1:40674.service - OpenSSH per-connection server daemon (10.0.0.1:40674).
Jan 30 13:47:47.335738 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 40674 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:47:47.337220 sshd[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:47:47.341087 systemd-logind[1448]: New session 21 of user core.
Jan 30 13:47:47.350391 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 13:47:47.456272 sshd[4183]: pam_unix(sshd:session): session closed for user core
Jan 30 13:47:47.459344 systemd[1]: sshd@21-10.0.0.67:22-10.0.0.1:40674.service: Deactivated successfully.
Jan 30 13:47:47.461680 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 13:47:47.463270 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit.
Jan 30 13:47:47.464185 systemd-logind[1448]: Removed session 21.
Jan 30 13:47:52.476458 systemd[1]: Started sshd@22-10.0.0.67:22-10.0.0.1:40690.service - OpenSSH per-connection server daemon (10.0.0.1:40690).
Jan 30 13:47:52.508092 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 40690 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:47:52.510047 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:47:52.514801 systemd-logind[1448]: New session 22 of user core.
Jan 30 13:47:52.524502 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 13:47:52.625255 sshd[4200]: pam_unix(sshd:session): session closed for user core
Jan 30 13:47:52.629353 systemd[1]: sshd@22-10.0.0.67:22-10.0.0.1:40690.service: Deactivated successfully.
Jan 30 13:47:52.631289 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 13:47:52.632010 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit.
Jan 30 13:47:52.632965 systemd-logind[1448]: Removed session 22.
Jan 30 13:47:56.945512 kubelet[2585]: E0130 13:47:56.945474 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:47:57.642389 systemd[1]: Started sshd@23-10.0.0.67:22-10.0.0.1:60824.service - OpenSSH per-connection server daemon (10.0.0.1:60824).
Jan 30 13:47:57.672986 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 60824 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:47:57.674400 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:47:57.678251 systemd-logind[1448]: New session 23 of user core.
Jan 30 13:47:57.685378 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 13:47:57.788397 sshd[4216]: pam_unix(sshd:session): session closed for user core
Jan 30 13:47:57.792517 systemd[1]: sshd@23-10.0.0.67:22-10.0.0.1:60824.service: Deactivated successfully.
Jan 30 13:47:57.794478 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 13:47:57.795286 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit.
Jan 30 13:47:57.796105 systemd-logind[1448]: Removed session 23.
Jan 30 13:48:02.805327 systemd[1]: Started sshd@24-10.0.0.67:22-10.0.0.1:60832.service - OpenSSH per-connection server daemon (10.0.0.1:60832).
Jan 30 13:48:02.835612 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 60832 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:48:02.837156 sshd[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:48:02.840853 systemd-logind[1448]: New session 24 of user core.
Jan 30 13:48:02.847414 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 13:48:02.950079 sshd[4230]: pam_unix(sshd:session): session closed for user core
Jan 30 13:48:02.963456 systemd[1]: sshd@24-10.0.0.67:22-10.0.0.1:60832.service: Deactivated successfully.
Jan 30 13:48:02.965194 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 13:48:02.966711 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit.
Jan 30 13:48:02.971554 systemd[1]: Started sshd@25-10.0.0.67:22-10.0.0.1:60844.service - OpenSSH per-connection server daemon (10.0.0.1:60844).
Jan 30 13:48:02.972521 systemd-logind[1448]: Removed session 24.
Jan 30 13:48:03.000169 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 60844 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:48:03.001760 sshd[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:48:03.005977 systemd-logind[1448]: New session 25 of user core.
Jan 30 13:48:03.015411 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 13:48:04.364507 containerd[1458]: time="2025-01-30T13:48:04.364445338Z" level=info msg="StopContainer for \"c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241\" with timeout 30 (s)"
Jan 30 13:48:04.365055 containerd[1458]: time="2025-01-30T13:48:04.364876830Z" level=info msg="Stop container \"c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241\" with signal terminated"
Jan 30 13:48:04.380600 systemd[1]: cri-containerd-c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241.scope: Deactivated successfully.
Jan 30 13:48:04.392727 containerd[1458]: time="2025-01-30T13:48:04.391655438Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 13:48:04.404269 containerd[1458]: time="2025-01-30T13:48:04.402493527Z" level=info msg="StopContainer for \"2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348\" with timeout 2 (s)"
Jan 30 13:48:04.404269 containerd[1458]: time="2025-01-30T13:48:04.402850628Z" level=info msg="Stop container \"2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348\" with signal terminated"
Jan 30 13:48:04.404031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241-rootfs.mount: Deactivated successfully.
Jan 30 13:48:04.409122 systemd-networkd[1398]: lxc_health: Link DOWN
Jan 30 13:48:04.409132 systemd-networkd[1398]: lxc_health: Lost carrier
Jan 30 13:48:04.418889 containerd[1458]: time="2025-01-30T13:48:04.418817237Z" level=info msg="shim disconnected" id=c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241 namespace=k8s.io
Jan 30 13:48:04.418889 containerd[1458]: time="2025-01-30T13:48:04.418873775Z" level=warning msg="cleaning up after shim disconnected" id=c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241 namespace=k8s.io
Jan 30 13:48:04.418889 containerd[1458]: time="2025-01-30T13:48:04.418882912Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:48:04.437159 systemd[1]: cri-containerd-2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348.scope: Deactivated successfully.
Jan 30 13:48:04.437683 systemd[1]: cri-containerd-2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348.scope: Consumed 7.043s CPU time.
Jan 30 13:48:04.438406 containerd[1458]: time="2025-01-30T13:48:04.438360005Z" level=info msg="StopContainer for \"c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241\" returns successfully"
Jan 30 13:48:04.442566 containerd[1458]: time="2025-01-30T13:48:04.442521252Z" level=info msg="StopPodSandbox for \"39b7ec74db4fae370045af9f99a6afc065b4da95c40c34069daa4253b8c5cd72\""
Jan 30 13:48:04.442620 containerd[1458]: time="2025-01-30T13:48:04.442572729Z" level=info msg="Container to stop \"c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:48:04.444865 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-39b7ec74db4fae370045af9f99a6afc065b4da95c40c34069daa4253b8c5cd72-shm.mount: Deactivated successfully.
Jan 30 13:48:04.452637 systemd[1]: cri-containerd-39b7ec74db4fae370045af9f99a6afc065b4da95c40c34069daa4253b8c5cd72.scope: Deactivated successfully.
Jan 30 13:48:04.459740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348-rootfs.mount: Deactivated successfully.
Jan 30 13:48:04.467977 containerd[1458]: time="2025-01-30T13:48:04.467575932Z" level=info msg="shim disconnected" id=2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348 namespace=k8s.io
Jan 30 13:48:04.467977 containerd[1458]: time="2025-01-30T13:48:04.467667816Z" level=warning msg="cleaning up after shim disconnected" id=2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348 namespace=k8s.io
Jan 30 13:48:04.467977 containerd[1458]: time="2025-01-30T13:48:04.467685300Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:48:04.476735 containerd[1458]: time="2025-01-30T13:48:04.476655456Z" level=info msg="shim disconnected" id=39b7ec74db4fae370045af9f99a6afc065b4da95c40c34069daa4253b8c5cd72 namespace=k8s.io
Jan 30 13:48:04.477206 containerd[1458]: time="2025-01-30T13:48:04.477019610Z" level=warning msg="cleaning up after shim disconnected" id=39b7ec74db4fae370045af9f99a6afc065b4da95c40c34069daa4253b8c5cd72 namespace=k8s.io
Jan 30 13:48:04.477206 containerd[1458]: time="2025-01-30T13:48:04.477040751Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:48:04.488725 containerd[1458]: time="2025-01-30T13:48:04.488678264Z" level=info msg="StopContainer for \"2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348\" returns successfully"
Jan 30 13:48:04.489292 containerd[1458]: time="2025-01-30T13:48:04.489254012Z" level=info msg="StopPodSandbox for \"d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d\""
Jan 30 13:48:04.489292 containerd[1458]: time="2025-01-30T13:48:04.489280542Z" level=info msg="Container to stop \"22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:48:04.489292 containerd[1458]: time="2025-01-30T13:48:04.489290842Z" level=info msg="Container to stop \"0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:48:04.489426 containerd[1458]: time="2025-01-30T13:48:04.489300701Z" level=info msg="Container to stop \"db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:48:04.489426 containerd[1458]: time="2025-01-30T13:48:04.489310169Z" level=info msg="Container to stop \"2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:48:04.489426 containerd[1458]: time="2025-01-30T13:48:04.489319367Z" level=info msg="Container to stop \"4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:48:04.496120 systemd[1]: cri-containerd-d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d.scope: Deactivated successfully.
Jan 30 13:48:04.503860 containerd[1458]: time="2025-01-30T13:48:04.503785464Z" level=info msg="TearDown network for sandbox \"39b7ec74db4fae370045af9f99a6afc065b4da95c40c34069daa4253b8c5cd72\" successfully"
Jan 30 13:48:04.503860 containerd[1458]: time="2025-01-30T13:48:04.503845809Z" level=info msg="StopPodSandbox for \"39b7ec74db4fae370045af9f99a6afc065b4da95c40c34069daa4253b8c5cd72\" returns successfully"
Jan 30 13:48:04.523718 containerd[1458]: time="2025-01-30T13:48:04.523603849Z" level=info msg="shim disconnected" id=d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d namespace=k8s.io
Jan 30 13:48:04.524058 containerd[1458]: time="2025-01-30T13:48:04.523819599Z" level=warning msg="cleaning up after shim disconnected" id=d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d namespace=k8s.io
Jan 30 13:48:04.524058 containerd[1458]: time="2025-01-30T13:48:04.523836662Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:48:04.541023 containerd[1458]: time="2025-01-30T13:48:04.540970467Z" level=info msg="TearDown network for sandbox \"d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d\" successfully"
Jan 30 13:48:04.541023 containerd[1458]: time="2025-01-30T13:48:04.541012737Z" level=info msg="StopPodSandbox for \"d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d\" returns successfully"
Jan 30 13:48:04.579575 kubelet[2585]: I0130 13:48:04.579509 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbff8\" (UniqueName: \"kubernetes.io/projected/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-kube-api-access-nbff8\") pod \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") "
Jan 30 13:48:04.579575 kubelet[2585]: I0130 13:48:04.579546 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-etc-cni-netd\") pod \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") "
Jan 30 13:48:04.579575 kubelet[2585]: I0130 13:48:04.579564 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-xtables-lock\") pod \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") "
Jan 30 13:48:04.579575 kubelet[2585]: I0130 13:48:04.579580 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd150c15-5296-4a77-9e00-243a2abad1ad-cilium-config-path\") pod \"bd150c15-5296-4a77-9e00-243a2abad1ad\" (UID: \"bd150c15-5296-4a77-9e00-243a2abad1ad\") "
Jan 30 13:48:04.580212 kubelet[2585]: I0130 13:48:04.579594 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-cilium-cgroup\") pod \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") "
Jan 30 13:48:04.580212 kubelet[2585]: I0130 13:48:04.579609 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-hubble-tls\") pod \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") "
Jan 30 13:48:04.580212 kubelet[2585]: I0130 13:48:04.579622 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-hostproc\") pod \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") "
Jan 30 13:48:04.580212 kubelet[2585]: I0130 13:48:04.579634 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-lib-modules\") pod \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") "
Jan 30 13:48:04.580212 kubelet[2585]: I0130 13:48:04.579651 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-host-proc-sys-kernel\") pod \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") "
Jan 30 13:48:04.580212 kubelet[2585]: I0130 13:48:04.579691 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-cilium-run\") pod \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") "
Jan 30 13:48:04.580419 kubelet[2585]: I0130 13:48:04.579710 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-cilium-config-path\") pod \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") "
Jan 30 13:48:04.580419 kubelet[2585]: I0130 13:48:04.579730 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-host-proc-sys-net\") pod \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") "
Jan 30 13:48:04.580419 kubelet[2585]: I0130 13:48:04.579676 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" (UID: "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:04.580419 kubelet[2585]: I0130 13:48:04.579754 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-clustermesh-secrets\") pod \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") "
Jan 30 13:48:04.580419 kubelet[2585]: I0130 13:48:04.579775 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-bpf-maps\") pod \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") "
Jan 30 13:48:04.580419 kubelet[2585]: I0130 13:48:04.579792 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-cni-path\") pod \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\" (UID: \"a6786c32-44c6-44ae-bb6c-ec5d36f18d8d\") "
Jan 30 13:48:04.580560 kubelet[2585]: I0130 13:48:04.579804 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" (UID: "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:04.580560 kubelet[2585]: I0130 13:48:04.579810 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp6df\" (UniqueName: \"kubernetes.io/projected/bd150c15-5296-4a77-9e00-243a2abad1ad-kube-api-access-tp6df\") pod \"bd150c15-5296-4a77-9e00-243a2abad1ad\" (UID: \"bd150c15-5296-4a77-9e00-243a2abad1ad\") "
Jan 30 13:48:04.580560 kubelet[2585]: I0130 13:48:04.579884 2585 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jan 30 13:48:04.580560 kubelet[2585]: I0130 13:48:04.579910 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" (UID: "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:04.583349 kubelet[2585]: I0130 13:48:04.583326 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd150c15-5296-4a77-9e00-243a2abad1ad-kube-api-access-tp6df" (OuterVolumeSpecName: "kube-api-access-tp6df") pod "bd150c15-5296-4a77-9e00-243a2abad1ad" (UID: "bd150c15-5296-4a77-9e00-243a2abad1ad"). InnerVolumeSpecName "kube-api-access-tp6df". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:48:04.583546 kubelet[2585]: I0130 13:48:04.583326 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" (UID: "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:48:04.583546 kubelet[2585]: I0130 13:48:04.583335 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd150c15-5296-4a77-9e00-243a2abad1ad-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bd150c15-5296-4a77-9e00-243a2abad1ad" (UID: "bd150c15-5296-4a77-9e00-243a2abad1ad"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:48:04.583546 kubelet[2585]: I0130 13:48:04.583355 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" (UID: "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:04.583546 kubelet[2585]: I0130 13:48:04.583464 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" (UID: "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:04.583546 kubelet[2585]: I0130 13:48:04.583451 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" (UID: "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:04.583691 kubelet[2585]: I0130 13:48:04.583497 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" (UID: "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:04.583691 kubelet[2585]: I0130 13:48:04.583513 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-hostproc" (OuterVolumeSpecName: "hostproc") pod "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" (UID: "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:04.583691 kubelet[2585]: I0130 13:48:04.583528 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" (UID: "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:04.583691 kubelet[2585]: I0130 13:48:04.583481 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-cni-path" (OuterVolumeSpecName: "cni-path") pod "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" (UID: "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:04.584491 kubelet[2585]: I0130 13:48:04.584450 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-kube-api-access-nbff8" (OuterVolumeSpecName: "kube-api-access-nbff8") pod "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" (UID: "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d"). InnerVolumeSpecName "kube-api-access-nbff8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:48:04.586283 kubelet[2585]: I0130 13:48:04.586259 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" (UID: "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:48:04.586646 kubelet[2585]: I0130 13:48:04.586622 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" (UID: "a6786c32-44c6-44ae-bb6c-ec5d36f18d8d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:48:04.680739 kubelet[2585]: I0130 13:48:04.680602 2585 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 30 13:48:04.680739 kubelet[2585]: I0130 13:48:04.680646 2585 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jan 30 13:48:04.680739 kubelet[2585]: I0130 13:48:04.680687 2585 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-cilium-run\") on node \"localhost\" DevicePath \"\""
Jan 30 13:48:04.680739 kubelet[2585]: I0130 13:48:04.680701 2585 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 30 13:48:04.680739 kubelet[2585]: I0130 13:48:04.680714 2585 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jan 30 13:48:04.680739 kubelet[2585]: I0130 13:48:04.680724 2585 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jan 30 13:48:04.680739 kubelet[2585]: I0130 13:48:04.680735 2585 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tp6df\" (UniqueName: \"kubernetes.io/projected/bd150c15-5296-4a77-9e00-243a2abad1ad-kube-api-access-tp6df\") on node \"localhost\" DevicePath \"\""
Jan 30 13:48:04.680739 kubelet[2585]: I0130 13:48:04.680746 2585 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jan 30 13:48:04.681016 kubelet[2585]: I0130 13:48:04.680758 2585 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-cni-path\") on node \"localhost\" DevicePath \"\""
Jan 30 13:48:04.681016 kubelet[2585]: I0130 13:48:04.680769 2585 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nbff8\" (UniqueName: \"kubernetes.io/projected/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-kube-api-access-nbff8\") on node \"localhost\" DevicePath \"\""
Jan 30 13:48:04.681016 kubelet[2585]: I0130 13:48:04.680780 2585 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jan 30 13:48:04.681016 kubelet[2585]: I0130 13:48:04.680790 2585 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd150c15-5296-4a77-9e00-243a2abad1ad-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 30 13:48:04.681016 kubelet[2585]: I0130 13:48:04.680800 2585 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-hostproc\") on node \"localhost\" DevicePath \"\""
Jan 30 13:48:04.681016 kubelet[2585]: I0130 13:48:04.680810 2585 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jan 30 13:48:04.681016 kubelet[2585]: I0130 13:48:04.680821 2585 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jan 30 13:48:04.944916 kubelet[2585]: E0130 13:48:04.944803 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:48:05.000125 kubelet[2585]: E0130 13:48:05.000070 2585 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 13:48:05.137871 kubelet[2585]: I0130 13:48:05.137840 2585 scope.go:117] "RemoveContainer" containerID="2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348"
Jan 30 13:48:05.139052 containerd[1458]: time="2025-01-30T13:48:05.138959694Z" level=info msg="RemoveContainer for \"2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348\""
Jan 30 13:48:05.143356 systemd[1]: Removed slice kubepods-burstable-poda6786c32_44c6_44ae_bb6c_ec5d36f18d8d.slice - libcontainer container kubepods-burstable-poda6786c32_44c6_44ae_bb6c_ec5d36f18d8d.slice.
Jan 30 13:48:05.143638 systemd[1]: kubepods-burstable-poda6786c32_44c6_44ae_bb6c_ec5d36f18d8d.slice: Consumed 7.147s CPU time.
Jan 30 13:48:05.145619 systemd[1]: Removed slice kubepods-besteffort-podbd150c15_5296_4a77_9e00_243a2abad1ad.slice - libcontainer container kubepods-besteffort-podbd150c15_5296_4a77_9e00_243a2abad1ad.slice.
Jan 30 13:48:05.146731 containerd[1458]: time="2025-01-30T13:48:05.146699198Z" level=info msg="RemoveContainer for \"2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348\" returns successfully"
Jan 30 13:48:05.147001 kubelet[2585]: I0130 13:48:05.146890 2585 scope.go:117] "RemoveContainer" containerID="4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4"
Jan 30 13:48:05.147892 containerd[1458]: time="2025-01-30T13:48:05.147862515Z" level=info msg="RemoveContainer for \"4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4\""
Jan 30 13:48:05.151678 containerd[1458]: time="2025-01-30T13:48:05.151567078Z" level=info msg="RemoveContainer for \"4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4\" returns successfully"
Jan 30 13:48:05.151758 kubelet[2585]: I0130 13:48:05.151731 2585 scope.go:117] "RemoveContainer" containerID="db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67"
Jan 30 13:48:05.152706 containerd[1458]: time="2025-01-30T13:48:05.152684918Z" level=info msg="RemoveContainer for \"db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67\""
Jan 30 13:48:05.156509 containerd[1458]: time="2025-01-30T13:48:05.156472890Z" level=info msg="RemoveContainer for \"db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67\" returns successfully"
Jan 30 13:48:05.156699 kubelet[2585]: I0130 13:48:05.156660 2585 scope.go:117] "RemoveContainer" containerID="0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3"
Jan 30 13:48:05.157607 containerd[1458]: time="2025-01-30T13:48:05.157581913Z" level=info msg="RemoveContainer for \"0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3\""
Jan 30 13:48:05.161126 containerd[1458]: time="2025-01-30T13:48:05.161092184Z" level=info msg="RemoveContainer for \"0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3\" returns successfully"
Jan 30 13:48:05.161283 kubelet[2585]: I0130 13:48:05.161250 2585 scope.go:117] "RemoveContainer" containerID="22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f"
Jan 30 13:48:05.162253 containerd[1458]: time="2025-01-30T13:48:05.162214814Z" level=info msg="RemoveContainer for \"22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f\""
Jan 30 13:48:05.165505 containerd[1458]: time="2025-01-30T13:48:05.165486191Z" level=info msg="RemoveContainer for \"22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f\" returns successfully"
Jan 30 13:48:05.165629 kubelet[2585]: I0130 13:48:05.165608 2585 scope.go:117] "RemoveContainer" containerID="2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348"
Jan 30 13:48:05.169454 containerd[1458]: time="2025-01-30T13:48:05.169406885Z" level=error msg="ContainerStatus for \"2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348\": not found"
Jan 30 13:48:05.169568 kubelet[2585]: E0130 13:48:05.169546 2585 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348\": not found" containerID="2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348"
Jan 30 13:48:05.169647 kubelet[2585]: I0130 13:48:05.169578 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348"} err="failed to get container status \"2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ac40a19ccc4b65d62d2036f8424b164259aff665cc95938bb79966512c7f348\": not found"
Jan 30 13:48:05.169680 kubelet[2585]: I0130 13:48:05.169648 2585 scope.go:117] "RemoveContainer" containerID="4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4"
Jan 30 13:48:05.169839 containerd[1458]: time="2025-01-30T13:48:05.169810984Z" level=error msg="ContainerStatus for \"4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4\": not found"
Jan 30 13:48:05.169957 kubelet[2585]: E0130 13:48:05.169924 2585 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4\": not found" containerID="4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4"
Jan 30 13:48:05.169994 kubelet[2585]: I0130 13:48:05.169952 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4"} err="failed to get container status \"4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4\": rpc error: code = NotFound desc = an error occurred when try to find container \"4803cd5ddbf1451378aa23ef2be3ea38056e7f16bf15667c6c9f4cf1067c1cf4\": not found"
Jan 30 13:48:05.169994 kubelet[2585]: I0130 13:48:05.169971 2585 scope.go:117] "RemoveContainer" containerID="db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67"
Jan 30 13:48:05.170167 containerd[1458]: time="2025-01-30T13:48:05.170132808Z" level=error msg="ContainerStatus for \"db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67\": not found"
Jan 30 13:48:05.170279 kubelet[2585]: E0130 13:48:05.170257 2585 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67\": not found" containerID="db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67"
Jan 30 13:48:05.170307 kubelet[2585]: I0130 13:48:05.170278 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67"} err="failed to get container status \"db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67\": rpc error: code = NotFound desc = an error occurred when try to find container \"db50fb5af531077c03745f2b39e6391e8e8705b085b9d66d3451763be31dcc67\": not found"
Jan 30 13:48:05.170307 kubelet[2585]: I0130 13:48:05.170294 2585 scope.go:117] "RemoveContainer" containerID="0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3"
Jan 30 13:48:05.170457 containerd[1458]: time="2025-01-30T13:48:05.170427870Z" level=error msg="ContainerStatus for \"0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3\": not found"
Jan 30 13:48:05.170519 kubelet[2585]: E0130 13:48:05.170500 2585 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3\": not found" containerID="0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3"
Jan 30 13:48:05.170573 kubelet[2585]: I0130 13:48:05.170514 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3"} err="failed to get container status \"0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3\": rpc error: code = NotFound desc = an error occurred when try to find container \"0355e7cc7b7e757f8d3db6b6da7afd3a0842a53514d8c64f6a71e470184e5bc3\": not found"
Jan 30 13:48:05.170573 kubelet[2585]: I0130 13:48:05.170530 2585 scope.go:117] "RemoveContainer" containerID="22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f"
Jan 30 13:48:05.170693 containerd[1458]: time="2025-01-30T13:48:05.170657489Z" level=error msg="ContainerStatus for \"22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f\": not found"
Jan 30 13:48:05.170778 kubelet[2585]: E0130 13:48:05.170758 2585 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f\": not found" containerID="22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f"
Jan 30 13:48:05.170812 kubelet[2585]: I0130 13:48:05.170779 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f"} err="failed to get container status \"22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f\": rpc error: code = NotFound desc = an error occurred when try to find container \"22aac9367e7f2e139b7202b9b3c0924b2fdaf724c12868fcfa1dac0cf4dd001f\": not found"
Jan 30 13:48:05.170812 kubelet[2585]: I0130 13:48:05.170793 2585 scope.go:117] "RemoveContainer" containerID="c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241"
Jan 30 13:48:05.171521 containerd[1458]: time="2025-01-30T13:48:05.171498482Z" level=info msg="RemoveContainer for \"c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241\""
Jan 30 13:48:05.174738 containerd[1458]: time="2025-01-30T13:48:05.174715283Z" level=info msg="RemoveContainer for \"c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241\" returns successfully"
Jan 30 13:48:05.174874 kubelet[2585]: I0130 13:48:05.174838 2585 scope.go:117] "RemoveContainer" containerID="c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241"
Jan 30 13:48:05.175035 containerd[1458]: time="2025-01-30T13:48:05.174997762Z" level=error msg="ContainerStatus for \"c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241\": not found"
Jan 30 13:48:05.175101 kubelet[2585]: E0130 13:48:05.175080 2585 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241\": not found" containerID="c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241"
Jan 30 13:48:05.175127 kubelet[2585]: I0130 13:48:05.175099 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241"} err="failed to get container status \"c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2edca0552ef1a165c4d126da613ea8ac2b2575dea9c2ef32209487467336241\": not found"
Jan 30 13:48:05.377558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39b7ec74db4fae370045af9f99a6afc065b4da95c40c34069daa4253b8c5cd72-rootfs.mount: Deactivated successfully.
Jan 30 13:48:05.377681 systemd[1]: var-lib-kubelet-pods-bd150c15\x2d5296\x2d4a77\x2d9e00\x2d243a2abad1ad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtp6df.mount: Deactivated successfully.
Jan 30 13:48:05.377762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d-rootfs.mount: Deactivated successfully.
Jan 30 13:48:05.377838 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d01ed1e0100cadf355b60bd53dd7c367d57bd29c1e57839d9b2c6c517900bd3d-shm.mount: Deactivated successfully.
Jan 30 13:48:05.377910 systemd[1]: var-lib-kubelet-pods-a6786c32\x2d44c6\x2d44ae\x2dbb6c\x2dec5d36f18d8d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnbff8.mount: Deactivated successfully.
Jan 30 13:48:05.377983 systemd[1]: var-lib-kubelet-pods-a6786c32\x2d44c6\x2d44ae\x2dbb6c\x2dec5d36f18d8d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 30 13:48:05.378060 systemd[1]: var-lib-kubelet-pods-a6786c32\x2d44c6\x2d44ae\x2dbb6c\x2dec5d36f18d8d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 30 13:48:05.947093 kubelet[2585]: I0130 13:48:05.947028 2585 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" path="/var/lib/kubelet/pods/a6786c32-44c6-44ae-bb6c-ec5d36f18d8d/volumes"
Jan 30 13:48:05.948166 kubelet[2585]: I0130 13:48:05.948143 2585 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd150c15-5296-4a77-9e00-243a2abad1ad" path="/var/lib/kubelet/pods/bd150c15-5296-4a77-9e00-243a2abad1ad/volumes"
Jan 30 13:48:06.328800 sshd[4244]: pam_unix(sshd:session): session closed for user core
Jan 30 13:48:06.340404 systemd[1]: sshd@25-10.0.0.67:22-10.0.0.1:60844.service: Deactivated successfully.
Jan 30 13:48:06.342532 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 13:48:06.344279 systemd-logind[1448]: Session 25 logged out. Waiting for processes to exit.
Jan 30 13:48:06.350782 systemd[1]: Started sshd@26-10.0.0.67:22-10.0.0.1:60846.service - OpenSSH per-connection server daemon (10.0.0.1:60846).
Jan 30 13:48:06.351821 systemd-logind[1448]: Removed session 25.
Jan 30 13:48:06.376616 sshd[4404]: Accepted publickey for core from 10.0.0.1 port 60846 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:48:06.378608 sshd[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:48:06.383126 systemd-logind[1448]: New session 26 of user core.
Jan 30 13:48:06.393352 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 13:48:06.764593 sshd[4404]: pam_unix(sshd:session): session closed for user core
Jan 30 13:48:06.776642 systemd[1]: sshd@26-10.0.0.67:22-10.0.0.1:60846.service: Deactivated successfully.
Jan 30 13:48:06.779650 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 13:48:06.782284 systemd-logind[1448]: Session 26 logged out. Waiting for processes to exit.
Jan 30 13:48:06.790364 kubelet[2585]: I0130 13:48:06.790225 2585 topology_manager.go:215] "Topology Admit Handler" podUID="7f05b76f-40fa-410e-a09d-107ce5bfe084" podNamespace="kube-system" podName="cilium-v6rnn"
Jan 30 13:48:06.790364 kubelet[2585]: E0130 13:48:06.790374 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" containerName="clean-cilium-state"
Jan 30 13:48:06.790516 kubelet[2585]: E0130 13:48:06.790384 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" containerName="cilium-agent"
Jan 30 13:48:06.790516 kubelet[2585]: E0130 13:48:06.790391 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" containerName="mount-bpf-fs"
Jan 30 13:48:06.790516 kubelet[2585]: E0130 13:48:06.790397 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd150c15-5296-4a77-9e00-243a2abad1ad" containerName="cilium-operator"
Jan 30 13:48:06.790516 kubelet[2585]: E0130 13:48:06.790403 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" containerName="mount-cgroup"
Jan 30 13:48:06.790516 kubelet[2585]: E0130 13:48:06.790409 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" containerName="apply-sysctl-overwrites"
Jan 30 13:48:06.790516 kubelet[2585]: I0130 13:48:06.790428 2585 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd150c15-5296-4a77-9e00-243a2abad1ad" containerName="cilium-operator"
Jan 30 13:48:06.790516 kubelet[2585]: I0130 13:48:06.790435 2585 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6786c32-44c6-44ae-bb6c-ec5d36f18d8d" containerName="cilium-agent"
Jan 30 13:48:06.793993 systemd[1]: Started sshd@27-10.0.0.67:22-10.0.0.1:60862.service - OpenSSH per-connection server daemon (10.0.0.1:60862).
Jan 30 13:48:06.798039 systemd-logind[1448]: Removed session 26.
Jan 30 13:48:06.804527 systemd[1]: Created slice kubepods-burstable-pod7f05b76f_40fa_410e_a09d_107ce5bfe084.slice - libcontainer container kubepods-burstable-pod7f05b76f_40fa_410e_a09d_107ce5bfe084.slice.
Jan 30 13:48:06.842495 sshd[4417]: Accepted publickey for core from 10.0.0.1 port 60862 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:48:06.844454 sshd[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:48:06.848836 systemd-logind[1448]: New session 27 of user core.
Jan 30 13:48:06.857378 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 13:48:06.894778 kubelet[2585]: I0130 13:48:06.894692 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f05b76f-40fa-410e-a09d-107ce5bfe084-etc-cni-netd\") pod \"cilium-v6rnn\" (UID: \"7f05b76f-40fa-410e-a09d-107ce5bfe084\") " pod="kube-system/cilium-v6rnn"
Jan 30 13:48:06.894778 kubelet[2585]: I0130 13:48:06.894748 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f05b76f-40fa-410e-a09d-107ce5bfe084-cilium-config-path\") pod \"cilium-v6rnn\" (UID: \"7f05b76f-40fa-410e-a09d-107ce5bfe084\") " pod="kube-system/cilium-v6rnn"
Jan 30 13:48:06.894778 kubelet[2585]: I0130 13:48:06.894768 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f05b76f-40fa-410e-a09d-107ce5bfe084-cni-path\") pod \"cilium-v6rnn\" (UID: \"7f05b76f-40fa-410e-a09d-107ce5bfe084\") " pod="kube-system/cilium-v6rnn"
Jan 30 13:48:06.894778 kubelet[2585]: I0130 13:48:06.894785 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f05b76f-40fa-410e-a09d-107ce5bfe084-host-proc-sys-kernel\") pod \"cilium-v6rnn\" (UID: \"7f05b76f-40fa-410e-a09d-107ce5bfe084\") " pod="kube-system/cilium-v6rnn"
Jan 30 13:48:06.894778 kubelet[2585]: I0130 13:48:06.894802 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f05b76f-40fa-410e-a09d-107ce5bfe084-lib-modules\") pod \"cilium-v6rnn\" (UID: \"7f05b76f-40fa-410e-a09d-107ce5bfe084\") " pod="kube-system/cilium-v6rnn"
Jan 30 13:48:06.895082 kubelet[2585]: I0130 13:48:06.894817 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f05b76f-40fa-410e-a09d-107ce5bfe084-host-proc-sys-net\") pod \"cilium-v6rnn\" (UID: \"7f05b76f-40fa-410e-a09d-107ce5bfe084\") " pod="kube-system/cilium-v6rnn"
Jan 30 13:48:06.895082 kubelet[2585]: I0130 13:48:06.894833 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f05b76f-40fa-410e-a09d-107ce5bfe084-cilium-run\") pod \"cilium-v6rnn\" (UID: \"7f05b76f-40fa-410e-a09d-107ce5bfe084\") " pod="kube-system/cilium-v6rnn"
Jan 30 13:48:06.895082 kubelet[2585]: I0130 13:48:06.894848 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f05b76f-40fa-410e-a09d-107ce5bfe084-hubble-tls\") pod \"cilium-v6rnn\" (UID: \"7f05b76f-40fa-410e-a09d-107ce5bfe084\") " pod="kube-system/cilium-v6rnn"
Jan 30 13:48:06.895082 kubelet[2585]: I0130 13:48:06.894876 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f05b76f-40fa-410e-a09d-107ce5bfe084-cilium-cgroup\") pod \"cilium-v6rnn\" (UID: \"7f05b76f-40fa-410e-a09d-107ce5bfe084\") " pod="kube-system/cilium-v6rnn"
Jan 30 13:48:06.895082 kubelet[2585]: I0130 13:48:06.894944 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f05b76f-40fa-410e-a09d-107ce5bfe084-hostproc\") pod \"cilium-v6rnn\" (UID: \"7f05b76f-40fa-410e-a09d-107ce5bfe084\") " pod="kube-system/cilium-v6rnn"
Jan 30 13:48:06.895082 kubelet[2585]: I0130 13:48:06.894989 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f05b76f-40fa-410e-a09d-107ce5bfe084-clustermesh-secrets\") pod \"cilium-v6rnn\" (UID: \"7f05b76f-40fa-410e-a09d-107ce5bfe084\") " pod="kube-system/cilium-v6rnn"
Jan 30 13:48:06.895298 kubelet[2585]: I0130 13:48:06.895008 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnz9k\" (UniqueName: \"kubernetes.io/projected/7f05b76f-40fa-410e-a09d-107ce5bfe084-kube-api-access-gnz9k\") pod \"cilium-v6rnn\" (UID: \"7f05b76f-40fa-410e-a09d-107ce5bfe084\") " pod="kube-system/cilium-v6rnn"
Jan 30 13:48:06.895298 kubelet[2585]: I0130 13:48:06.895095 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f05b76f-40fa-410e-a09d-107ce5bfe084-bpf-maps\") pod \"cilium-v6rnn\" (UID: \"7f05b76f-40fa-410e-a09d-107ce5bfe084\") " pod="kube-system/cilium-v6rnn"
Jan 30 13:48:06.895298 kubelet[2585]: I0130 13:48:06.895142 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f05b76f-40fa-410e-a09d-107ce5bfe084-xtables-lock\") pod \"cilium-v6rnn\" (UID: \"7f05b76f-40fa-410e-a09d-107ce5bfe084\") " pod="kube-system/cilium-v6rnn"
Jan 30 13:48:06.895298 kubelet[2585]: I0130 13:48:06.895160 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7f05b76f-40fa-410e-a09d-107ce5bfe084-cilium-ipsec-secrets\") pod \"cilium-v6rnn\" (UID: \"7f05b76f-40fa-410e-a09d-107ce5bfe084\") " pod="kube-system/cilium-v6rnn"
Jan 30 13:48:06.911977 sshd[4417]: pam_unix(sshd:session): session closed for user core
Jan 30 13:48:06.924068 systemd[1]: sshd@27-10.0.0.67:22-10.0.0.1:60862.service: Deactivated successfully.
Jan 30 13:48:06.927092 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 13:48:06.929083 systemd-logind[1448]: Session 27 logged out. Waiting for processes to exit.
Jan 30 13:48:06.934530 systemd[1]: Started sshd@28-10.0.0.67:22-10.0.0.1:60878.service - OpenSSH per-connection server daemon (10.0.0.1:60878).
Jan 30 13:48:06.935425 systemd-logind[1448]: Removed session 27.
Jan 30 13:48:06.962199 sshd[4425]: Accepted publickey for core from 10.0.0.1 port 60878 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:48:06.963855 sshd[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:48:06.968784 systemd-logind[1448]: New session 28 of user core.
Jan 30 13:48:06.977393 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 30 13:48:07.111773 kubelet[2585]: E0130 13:48:07.111612 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:48:07.112657 containerd[1458]: time="2025-01-30T13:48:07.112455607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v6rnn,Uid:7f05b76f-40fa-410e-a09d-107ce5bfe084,Namespace:kube-system,Attempt:0,}"
Jan 30 13:48:07.135101 containerd[1458]: time="2025-01-30T13:48:07.134961747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:48:07.135101 containerd[1458]: time="2025-01-30T13:48:07.135048712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:48:07.135101 containerd[1458]: time="2025-01-30T13:48:07.135075954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:48:07.135300 containerd[1458]: time="2025-01-30T13:48:07.135192646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:48:07.155403 systemd[1]: Started cri-containerd-f89fa1c0f8ef02541ef724f79c71a68588d49d023bfc97d79cdfab9ce6ce248e.scope - libcontainer container f89fa1c0f8ef02541ef724f79c71a68588d49d023bfc97d79cdfab9ce6ce248e.
Jan 30 13:48:07.177858 containerd[1458]: time="2025-01-30T13:48:07.177810254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v6rnn,Uid:7f05b76f-40fa-410e-a09d-107ce5bfe084,Namespace:kube-system,Attempt:0,} returns sandbox id \"f89fa1c0f8ef02541ef724f79c71a68588d49d023bfc97d79cdfab9ce6ce248e\""
Jan 30 13:48:07.178553 kubelet[2585]: E0130 13:48:07.178504 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:48:07.181692 containerd[1458]: time="2025-01-30T13:48:07.181612086Z" level=info msg="CreateContainer within sandbox \"f89fa1c0f8ef02541ef724f79c71a68588d49d023bfc97d79cdfab9ce6ce248e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 13:48:07.196134 containerd[1458]: time="2025-01-30T13:48:07.196082829Z" level=info msg="CreateContainer within sandbox \"f89fa1c0f8ef02541ef724f79c71a68588d49d023bfc97d79cdfab9ce6ce248e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"76997be59bcaaed60273e9b120ab6a7a4d47f81af943c0a8403a3bdecd6ae74a\""
Jan 30 13:48:07.196711 containerd[1458]: time="2025-01-30T13:48:07.196655750Z" level=info msg="StartContainer for \"76997be59bcaaed60273e9b120ab6a7a4d47f81af943c0a8403a3bdecd6ae74a\""
Jan 30 13:48:07.227362 systemd[1]: Started cri-containerd-76997be59bcaaed60273e9b120ab6a7a4d47f81af943c0a8403a3bdecd6ae74a.scope - libcontainer container 76997be59bcaaed60273e9b120ab6a7a4d47f81af943c0a8403a3bdecd6ae74a.
Jan 30 13:48:07.253011 containerd[1458]: time="2025-01-30T13:48:07.252951771Z" level=info msg="StartContainer for \"76997be59bcaaed60273e9b120ab6a7a4d47f81af943c0a8403a3bdecd6ae74a\" returns successfully"
Jan 30 13:48:07.264531 systemd[1]: cri-containerd-76997be59bcaaed60273e9b120ab6a7a4d47f81af943c0a8403a3bdecd6ae74a.scope: Deactivated successfully.
Jan 30 13:48:07.298093 containerd[1458]: time="2025-01-30T13:48:07.298009376Z" level=info msg="shim disconnected" id=76997be59bcaaed60273e9b120ab6a7a4d47f81af943c0a8403a3bdecd6ae74a namespace=k8s.io
Jan 30 13:48:07.298093 containerd[1458]: time="2025-01-30T13:48:07.298072015Z" level=warning msg="cleaning up after shim disconnected" id=76997be59bcaaed60273e9b120ab6a7a4d47f81af943c0a8403a3bdecd6ae74a namespace=k8s.io
Jan 30 13:48:07.298093 containerd[1458]: time="2025-01-30T13:48:07.298082566Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:48:08.150169 kubelet[2585]: E0130 13:48:08.149828 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:48:08.163623 containerd[1458]: time="2025-01-30T13:48:08.163570975Z" level=info msg="CreateContainer within sandbox \"f89fa1c0f8ef02541ef724f79c71a68588d49d023bfc97d79cdfab9ce6ce248e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 13:48:08.179361 containerd[1458]: time="2025-01-30T13:48:08.179299222Z" level=info msg="CreateContainer within sandbox \"f89fa1c0f8ef02541ef724f79c71a68588d49d023bfc97d79cdfab9ce6ce248e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"db232f574afb118bd5eb4e5a0f4677fc0c1217f8c66fbaa66542190478936161\""
Jan 30 13:48:08.179917 containerd[1458]: time="2025-01-30T13:48:08.179858237Z" level=info msg="StartContainer for \"db232f574afb118bd5eb4e5a0f4677fc0c1217f8c66fbaa66542190478936161\""
Jan 30 13:48:08.207371 systemd[1]: Started cri-containerd-db232f574afb118bd5eb4e5a0f4677fc0c1217f8c66fbaa66542190478936161.scope - libcontainer container db232f574afb118bd5eb4e5a0f4677fc0c1217f8c66fbaa66542190478936161.
Jan 30 13:48:08.235002 containerd[1458]: time="2025-01-30T13:48:08.234961758Z" level=info msg="StartContainer for \"db232f574afb118bd5eb4e5a0f4677fc0c1217f8c66fbaa66542190478936161\" returns successfully"
Jan 30 13:48:08.241500 systemd[1]: cri-containerd-db232f574afb118bd5eb4e5a0f4677fc0c1217f8c66fbaa66542190478936161.scope: Deactivated successfully.
Jan 30 13:48:08.272733 containerd[1458]: time="2025-01-30T13:48:08.272647204Z" level=info msg="shim disconnected" id=db232f574afb118bd5eb4e5a0f4677fc0c1217f8c66fbaa66542190478936161 namespace=k8s.io
Jan 30 13:48:08.272733 containerd[1458]: time="2025-01-30T13:48:08.272726094Z" level=warning msg="cleaning up after shim disconnected" id=db232f574afb118bd5eb4e5a0f4677fc0c1217f8c66fbaa66542190478936161 namespace=k8s.io
Jan 30 13:48:08.272923 containerd[1458]: time="2025-01-30T13:48:08.272738026Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:48:08.945409 kubelet[2585]: E0130 13:48:08.945372 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:48:08.945698 kubelet[2585]: E0130 13:48:08.945652 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:48:09.000846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db232f574afb118bd5eb4e5a0f4677fc0c1217f8c66fbaa66542190478936161-rootfs.mount: Deactivated successfully.
Jan 30 13:48:09.156671 kubelet[2585]: E0130 13:48:09.156643 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:48:09.158156 containerd[1458]: time="2025-01-30T13:48:09.158111235Z" level=info msg="CreateContainer within sandbox \"f89fa1c0f8ef02541ef724f79c71a68588d49d023bfc97d79cdfab9ce6ce248e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 13:48:09.173686 containerd[1458]: time="2025-01-30T13:48:09.173622057Z" level=info msg="CreateContainer within sandbox \"f89fa1c0f8ef02541ef724f79c71a68588d49d023bfc97d79cdfab9ce6ce248e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"63463847a7f196d8e6838736558fdf801a8923ffc9e5475713f76d69df74138a\""
Jan 30 13:48:09.174245 containerd[1458]: time="2025-01-30T13:48:09.174189457Z" level=info msg="StartContainer for \"63463847a7f196d8e6838736558fdf801a8923ffc9e5475713f76d69df74138a\""
Jan 30 13:48:09.203371 systemd[1]: Started cri-containerd-63463847a7f196d8e6838736558fdf801a8923ffc9e5475713f76d69df74138a.scope - libcontainer container 63463847a7f196d8e6838736558fdf801a8923ffc9e5475713f76d69df74138a.
Jan 30 13:48:09.234262 containerd[1458]: time="2025-01-30T13:48:09.234198759Z" level=info msg="StartContainer for \"63463847a7f196d8e6838736558fdf801a8923ffc9e5475713f76d69df74138a\" returns successfully"
Jan 30 13:48:09.234489 systemd[1]: cri-containerd-63463847a7f196d8e6838736558fdf801a8923ffc9e5475713f76d69df74138a.scope: Deactivated successfully.
Jan 30 13:48:09.259364 containerd[1458]: time="2025-01-30T13:48:09.259295797Z" level=info msg="shim disconnected" id=63463847a7f196d8e6838736558fdf801a8923ffc9e5475713f76d69df74138a namespace=k8s.io
Jan 30 13:48:09.259364 containerd[1458]: time="2025-01-30T13:48:09.259352516Z" level=warning msg="cleaning up after shim disconnected" id=63463847a7f196d8e6838736558fdf801a8923ffc9e5475713f76d69df74138a namespace=k8s.io
Jan 30 13:48:09.259364 containerd[1458]: time="2025-01-30T13:48:09.259361563Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:48:10.000854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63463847a7f196d8e6838736558fdf801a8923ffc9e5475713f76d69df74138a-rootfs.mount: Deactivated successfully.
Jan 30 13:48:10.001464 kubelet[2585]: E0130 13:48:10.001434 2585 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 13:48:10.159383 kubelet[2585]: E0130 13:48:10.159263 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:48:10.161630 containerd[1458]: time="2025-01-30T13:48:10.161596563Z" level=info msg="CreateContainer within sandbox \"f89fa1c0f8ef02541ef724f79c71a68588d49d023bfc97d79cdfab9ce6ce248e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 13:48:10.217072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3597051908.mount: Deactivated successfully.
Jan 30 13:48:10.220373 containerd[1458]: time="2025-01-30T13:48:10.220335527Z" level=info msg="CreateContainer within sandbox \"f89fa1c0f8ef02541ef724f79c71a68588d49d023bfc97d79cdfab9ce6ce248e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1c8cf44b72d2f57521b270a8daf8f7ca85e70c9fa2ca527e3753d2b2c19e50bc\""
Jan 30 13:48:10.221482 containerd[1458]: time="2025-01-30T13:48:10.220833805Z" level=info msg="StartContainer for \"1c8cf44b72d2f57521b270a8daf8f7ca85e70c9fa2ca527e3753d2b2c19e50bc\""
Jan 30 13:48:10.261409 systemd[1]: Started cri-containerd-1c8cf44b72d2f57521b270a8daf8f7ca85e70c9fa2ca527e3753d2b2c19e50bc.scope - libcontainer container 1c8cf44b72d2f57521b270a8daf8f7ca85e70c9fa2ca527e3753d2b2c19e50bc.
Jan 30 13:48:10.284582 systemd[1]: cri-containerd-1c8cf44b72d2f57521b270a8daf8f7ca85e70c9fa2ca527e3753d2b2c19e50bc.scope: Deactivated successfully.
Jan 30 13:48:10.287972 containerd[1458]: time="2025-01-30T13:48:10.287915713Z" level=info msg="StartContainer for \"1c8cf44b72d2f57521b270a8daf8f7ca85e70c9fa2ca527e3753d2b2c19e50bc\" returns successfully"
Jan 30 13:48:10.311369 containerd[1458]: time="2025-01-30T13:48:10.311295570Z" level=info msg="shim disconnected" id=1c8cf44b72d2f57521b270a8daf8f7ca85e70c9fa2ca527e3753d2b2c19e50bc namespace=k8s.io
Jan 30 13:48:10.311369 containerd[1458]: time="2025-01-30T13:48:10.311357427Z" level=warning msg="cleaning up after shim disconnected" id=1c8cf44b72d2f57521b270a8daf8f7ca85e70c9fa2ca527e3753d2b2c19e50bc namespace=k8s.io
Jan 30 13:48:10.311369 containerd[1458]: time="2025-01-30T13:48:10.311369791Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:48:11.001588 systemd[1]: run-containerd-runc-k8s.io-1c8cf44b72d2f57521b270a8daf8f7ca85e70c9fa2ca527e3753d2b2c19e50bc-runc.FvHPCH.mount: Deactivated successfully.
Jan 30 13:48:11.001742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c8cf44b72d2f57521b270a8daf8f7ca85e70c9fa2ca527e3753d2b2c19e50bc-rootfs.mount: Deactivated successfully.
Jan 30 13:48:11.163992 kubelet[2585]: E0130 13:48:11.163956 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:48:11.165979 containerd[1458]: time="2025-01-30T13:48:11.165921665Z" level=info msg="CreateContainer within sandbox \"f89fa1c0f8ef02541ef724f79c71a68588d49d023bfc97d79cdfab9ce6ce248e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 13:48:11.189870 containerd[1458]: time="2025-01-30T13:48:11.189814310Z" level=info msg="CreateContainer within sandbox \"f89fa1c0f8ef02541ef724f79c71a68588d49d023bfc97d79cdfab9ce6ce248e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8e795f868fcad71c6500bcd3e03d71f3c6c06df57ce334dc60a0ff50cb8b5ec7\""
Jan 30 13:48:11.190398 containerd[1458]: time="2025-01-30T13:48:11.190357263Z" level=info msg="StartContainer for \"8e795f868fcad71c6500bcd3e03d71f3c6c06df57ce334dc60a0ff50cb8b5ec7\""
Jan 30 13:48:11.220389 systemd[1]: Started cri-containerd-8e795f868fcad71c6500bcd3e03d71f3c6c06df57ce334dc60a0ff50cb8b5ec7.scope - libcontainer container 8e795f868fcad71c6500bcd3e03d71f3c6c06df57ce334dc60a0ff50cb8b5ec7.
Jan 30 13:48:11.252405 containerd[1458]: time="2025-01-30T13:48:11.252310343Z" level=info msg="StartContainer for \"8e795f868fcad71c6500bcd3e03d71f3c6c06df57ce334dc60a0ff50cb8b5ec7\" returns successfully"
Jan 30 13:48:11.666291 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 30 13:48:12.001145 systemd[1]: run-containerd-runc-k8s.io-8e795f868fcad71c6500bcd3e03d71f3c6c06df57ce334dc60a0ff50cb8b5ec7-runc.RnZSCw.mount: Deactivated successfully.
Jan 30 13:48:12.168069 kubelet[2585]: E0130 13:48:12.168046 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:48:12.180029 kubelet[2585]: I0130 13:48:12.179750 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-v6rnn" podStartSLOduration=6.179736448 podStartE2EDuration="6.179736448s" podCreationTimestamp="2025-01-30 13:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:48:12.179460964 +0000 UTC m=+92.313043929" watchObservedRunningTime="2025-01-30 13:48:12.179736448 +0000 UTC m=+92.313319413"
Jan 30 13:48:12.656431 kubelet[2585]: I0130 13:48:12.656381 2585 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T13:48:12Z","lastTransitionTime":"2025-01-30T13:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 30 13:48:13.169933 kubelet[2585]: E0130 13:48:13.169891 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:48:14.618652 systemd-networkd[1398]: lxc_health: Link UP
Jan 30 13:48:14.630985 systemd-networkd[1398]: lxc_health: Gained carrier
Jan 30 13:48:15.114293 kubelet[2585]: E0130 13:48:15.113664 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:48:15.174427 kubelet[2585]: E0130 13:48:15.173783 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:48:15.890477 systemd-networkd[1398]: lxc_health: Gained IPv6LL
Jan 30 13:48:16.175308 kubelet[2585]: E0130 13:48:16.175057 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:48:17.444157 systemd[1]: run-containerd-runc-k8s.io-8e795f868fcad71c6500bcd3e03d71f3c6c06df57ce334dc60a0ff50cb8b5ec7-runc.gvfQj5.mount: Deactivated successfully.
Jan 30 13:48:20.944600 kubelet[2585]: E0130 13:48:20.944559 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:48:21.682996 sshd[4425]: pam_unix(sshd:session): session closed for user core
Jan 30 13:48:21.687104 systemd[1]: sshd@28-10.0.0.67:22-10.0.0.1:60878.service: Deactivated successfully.
Jan 30 13:48:21.689148 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 13:48:21.689867 systemd-logind[1448]: Session 28 logged out. Waiting for processes to exit.
Jan 30 13:48:21.690697 systemd-logind[1448]: Removed session 28.