Jan 13 21:25:01.871878 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025 Jan 13 21:25:01.871900 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:25:01.871911 kernel: BIOS-provided physical RAM map: Jan 13 21:25:01.871918 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 13 21:25:01.871924 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 13 21:25:01.871930 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 13 21:25:01.871937 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 13 21:25:01.871943 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 13 21:25:01.871949 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 13 21:25:01.871958 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 13 21:25:01.871964 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 13 21:25:01.871970 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 13 21:25:01.871976 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 13 21:25:01.871983 kernel: NX (Execute Disable) protection: active Jan 13 21:25:01.871990 kernel: APIC: Static calls initialized Jan 13 21:25:01.872000 kernel: SMBIOS 2.8 present. 
Jan 13 21:25:01.872006 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 13 21:25:01.872013 kernel: Hypervisor detected: KVM Jan 13 21:25:01.872020 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 13 21:25:01.872027 kernel: kvm-clock: using sched offset of 2215538366 cycles Jan 13 21:25:01.872034 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 13 21:25:01.872041 kernel: tsc: Detected 2794.748 MHz processor Jan 13 21:25:01.872048 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 13 21:25:01.872056 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 13 21:25:01.872062 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 13 21:25:01.872072 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 13 21:25:01.872079 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 13 21:25:01.872086 kernel: Using GB pages for direct mapping Jan 13 21:25:01.872093 kernel: ACPI: Early table checksum verification disabled Jan 13 21:25:01.872100 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 13 21:25:01.872107 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:25:01.872114 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:25:01.872121 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:25:01.872130 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 13 21:25:01.872137 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:25:01.872144 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:25:01.872151 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:25:01.872158 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:25:01.872165 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Jan 13 21:25:01.872172 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Jan 13 21:25:01.872182 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 13 21:25:01.872192 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Jan 13 21:25:01.872199 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Jan 13 21:25:01.872206 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Jan 13 21:25:01.872213 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Jan 13 21:25:01.872220 kernel: No NUMA configuration found Jan 13 21:25:01.872227 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 13 21:25:01.872234 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 13 21:25:01.872244 kernel: Zone ranges: Jan 13 21:25:01.872251 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 21:25:01.872258 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 13 21:25:01.872265 kernel: Normal empty Jan 13 21:25:01.872272 kernel: Movable zone start for each node Jan 13 21:25:01.872279 kernel: Early memory node ranges Jan 13 21:25:01.872286 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 13 21:25:01.872293 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 13 21:25:01.872300 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 13 21:25:01.872309 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 21:25:01.872316 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 13 21:25:01.872323 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 13 21:25:01.872331 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 13 21:25:01.872345 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 13 21:25:01.872352 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 13 21:25:01.872360 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 13 21:25:01.872367 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 13 21:25:01.872374 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 21:25:01.872383 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 13 21:25:01.872390 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 13 21:25:01.872417 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 21:25:01.872424 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 13 21:25:01.872432 kernel: TSC deadline timer available Jan 13 21:25:01.872439 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 13 21:25:01.872447 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 13 21:25:01.872454 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 13 21:25:01.872461 kernel: kvm-guest: setup PV sched yield Jan 13 21:25:01.872468 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 13 21:25:01.872479 kernel: Booting paravirtualized kernel on KVM Jan 13 21:25:01.872486 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 21:25:01.872494 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 13 21:25:01.872501 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 13 21:25:01.872508 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 13 21:25:01.872515 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 13 21:25:01.872523 kernel: kvm-guest: PV spinlocks enabled Jan 13 21:25:01.872530 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 13 21:25:01.872538 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:25:01.872549 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 21:25:01.872556 kernel: random: crng init done Jan 13 21:25:01.872563 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 21:25:01.872570 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 21:25:01.872577 kernel: Fallback order for Node 0: 0 Jan 13 21:25:01.872584 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 13 21:25:01.872591 kernel: Policy zone: DMA32 Jan 13 21:25:01.872598 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 21:25:01.872608 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved) Jan 13 21:25:01.872615 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 13 21:25:01.872622 kernel: ftrace: allocating 37918 entries in 149 pages Jan 13 21:25:01.872629 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 21:25:01.872636 kernel: Dynamic Preempt: voluntary Jan 13 21:25:01.872643 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 21:25:01.872651 kernel: rcu: RCU event tracing is enabled. Jan 13 21:25:01.872659 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 13 21:25:01.872666 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 21:25:01.872676 kernel: Rude variant of Tasks RCU enabled. Jan 13 21:25:01.872683 kernel: Tracing variant of Tasks RCU enabled. Jan 13 21:25:01.872690 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 21:25:01.872697 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 13 21:25:01.872704 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 13 21:25:01.872711 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 21:25:01.872718 kernel: Console: colour VGA+ 80x25 Jan 13 21:25:01.872725 kernel: printk: console [ttyS0] enabled Jan 13 21:25:01.872732 kernel: ACPI: Core revision 20230628 Jan 13 21:25:01.872742 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 13 21:25:01.872749 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 21:25:01.872756 kernel: x2apic enabled Jan 13 21:25:01.872763 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 21:25:01.872770 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 13 21:25:01.872778 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 13 21:25:01.872785 kernel: kvm-guest: setup PV IPIs Jan 13 21:25:01.872802 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 13 21:25:01.872810 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 13 21:25:01.872817 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jan 13 21:25:01.872825 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 13 21:25:01.872832 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 13 21:25:01.872841 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 13 21:25:01.872849 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 21:25:01.872857 kernel: Spectre V2 : Mitigation: Retpolines Jan 13 21:25:01.872864 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 21:25:01.872874 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 13 21:25:01.872882 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 13 21:25:01.872889 kernel: RETBleed: Mitigation: untrained return thunk Jan 13 21:25:01.872897 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 13 21:25:01.872905 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 13 21:25:01.872913 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 13 21:25:01.872921 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 13 21:25:01.872929 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 13 21:25:01.872936 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 13 21:25:01.872947 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 21:25:01.872954 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 21:25:01.872962 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 21:25:01.872969 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 13 21:25:01.872977 kernel: Freeing SMP alternatives memory: 32K Jan 13 21:25:01.872984 kernel: pid_max: default: 32768 minimum: 301 Jan 13 21:25:01.872992 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 21:25:01.872999 kernel: landlock: Up and running. Jan 13 21:25:01.873007 kernel: SELinux: Initializing. Jan 13 21:25:01.873017 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 21:25:01.873024 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 21:25:01.873032 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 13 21:25:01.873040 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 21:25:01.873048 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 21:25:01.873055 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 21:25:01.873063 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 13 21:25:01.873070 kernel: ... version: 0 Jan 13 21:25:01.873080 kernel: ... bit width: 48 Jan 13 21:25:01.873088 kernel: ... generic registers: 6 Jan 13 21:25:01.873095 kernel: ... value mask: 0000ffffffffffff Jan 13 21:25:01.873103 kernel: ... max period: 00007fffffffffff Jan 13 21:25:01.873110 kernel: ... fixed-purpose events: 0 Jan 13 21:25:01.873118 kernel: ... 
event mask: 000000000000003f Jan 13 21:25:01.873125 kernel: signal: max sigframe size: 1776 Jan 13 21:25:01.873133 kernel: rcu: Hierarchical SRCU implementation. Jan 13 21:25:01.873140 kernel: rcu: Max phase no-delay instances is 400. Jan 13 21:25:01.873148 kernel: smp: Bringing up secondary CPUs ... Jan 13 21:25:01.873157 kernel: smpboot: x86: Booting SMP configuration: Jan 13 21:25:01.873165 kernel: .... node #0, CPUs: #1 #2 #3 Jan 13 21:25:01.873172 kernel: smp: Brought up 1 node, 4 CPUs Jan 13 21:25:01.873180 kernel: smpboot: Max logical packages: 1 Jan 13 21:25:01.873187 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jan 13 21:25:01.873195 kernel: devtmpfs: initialized Jan 13 21:25:01.873203 kernel: x86/mm: Memory block size: 128MB Jan 13 21:25:01.873213 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 21:25:01.873221 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 13 21:25:01.873233 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 21:25:01.873240 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 21:25:01.873248 kernel: audit: initializing netlink subsys (disabled) Jan 13 21:25:01.873256 kernel: audit: type=2000 audit(1736803501.989:1): state=initialized audit_enabled=0 res=1 Jan 13 21:25:01.873263 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 21:25:01.873270 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 21:25:01.873278 kernel: cpuidle: using governor menu Jan 13 21:25:01.873285 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 21:25:01.873293 kernel: dca service started, version 1.12.1 Jan 13 21:25:01.873303 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 13 21:25:01.873311 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 13 21:25:01.873318 kernel: PCI: Using configuration type 1 for base access Jan 13 21:25:01.873326 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 13 21:25:01.873334 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 21:25:01.873348 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 21:25:01.873355 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 21:25:01.873363 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 21:25:01.873370 kernel: ACPI: Added _OSI(Module Device) Jan 13 21:25:01.873380 kernel: ACPI: Added _OSI(Processor Device) Jan 13 21:25:01.873388 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 21:25:01.873395 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 21:25:01.873413 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 21:25:01.873430 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 21:25:01.873445 kernel: ACPI: Interpreter enabled Jan 13 21:25:01.873453 kernel: ACPI: PM: (supports S0 S3 S5) Jan 13 21:25:01.873460 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 21:25:01.873468 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 21:25:01.873478 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 21:25:01.873486 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 13 21:25:01.873493 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 21:25:01.873677 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 21:25:01.873805 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 13 21:25:01.873924 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 13 21:25:01.873934 kernel: PCI host bridge to bus 0000:00 Jan 13 21:25:01.874060 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 21:25:01.874170 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 21:25:01.874279 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 21:25:01.874410 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 13 21:25:01.874534 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 13 21:25:01.874643 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 13 21:25:01.874751 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 21:25:01.874892 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 13 21:25:01.875022 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 13 21:25:01.875142 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 13 21:25:01.875260 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 13 21:25:01.875388 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 13 21:25:01.875523 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 21:25:01.875656 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 21:25:01.875776 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 13 21:25:01.875895 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 13 21:25:01.876013 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 13 21:25:01.876139 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 13 21:25:01.876258 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 13 21:25:01.876384 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 13 
21:25:01.876522 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 13 21:25:01.876656 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 13 21:25:01.876776 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 13 21:25:01.876895 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 13 21:25:01.877014 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 13 21:25:01.877132 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 13 21:25:01.877258 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 13 21:25:01.877390 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 13 21:25:01.877532 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 13 21:25:01.877651 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 13 21:25:01.877768 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 13 21:25:01.877893 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 13 21:25:01.878011 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 13 21:25:01.878021 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 13 21:25:01.878033 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 13 21:25:01.878040 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 21:25:01.878048 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 13 21:25:01.878055 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 13 21:25:01.878063 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 13 21:25:01.878070 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 13 21:25:01.878077 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 13 21:25:01.878085 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 13 21:25:01.878092 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 13 21:25:01.878102 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 13 21:25:01.878109 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 13 21:25:01.878117 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 13 21:25:01.878124 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 13 21:25:01.878131 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 13 21:25:01.878139 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 13 21:25:01.878146 kernel: iommu: Default domain type: Translated Jan 13 21:25:01.878153 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 21:25:01.878161 kernel: PCI: Using ACPI for IRQ routing Jan 13 21:25:01.878170 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 21:25:01.878178 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 13 21:25:01.878185 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 13 21:25:01.878308 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 13 21:25:01.878450 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 13 21:25:01.878570 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 21:25:01.878580 kernel: vgaarb: loaded Jan 13 21:25:01.878588 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 13 21:25:01.878599 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 13 21:25:01.878606 kernel: clocksource: Switched to clocksource kvm-clock Jan 13 21:25:01.878614 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 
21:25:01.878621 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 21:25:01.878629 kernel: pnp: PnP ACPI init Jan 13 21:25:01.878755 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 13 21:25:01.878766 kernel: pnp: PnP ACPI: found 6 devices Jan 13 21:25:01.878774 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 21:25:01.878784 kernel: NET: Registered PF_INET protocol family Jan 13 21:25:01.878792 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 21:25:01.878799 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 21:25:01.878807 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 21:25:01.878815 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 21:25:01.878822 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 21:25:01.878830 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 21:25:01.878837 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 21:25:01.878844 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 21:25:01.878854 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 21:25:01.878861 kernel: NET: Registered PF_XDP protocol family Jan 13 21:25:01.878971 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 21:25:01.879080 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 21:25:01.879189 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 21:25:01.879301 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 13 21:25:01.879441 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 13 21:25:01.879552 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 13 21:25:01.879565 kernel: PCI: CLS 0 bytes, default 64 Jan 13 21:25:01.879573 kernel: Initialise system trusted keyrings Jan 13 21:25:01.879580 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 21:25:01.879588 kernel: Key type asymmetric registered Jan 13 21:25:01.879595 kernel: Asymmetric key parser 'x509' registered Jan 13 21:25:01.879603 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 21:25:01.879626 kernel: io scheduler mq-deadline registered Jan 13 21:25:01.879633 kernel: io scheduler kyber registered Jan 13 21:25:01.879641 kernel: io scheduler bfq registered Jan 13 21:25:01.879651 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 21:25:01.879659 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 13 21:25:01.879667 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 13 21:25:01.879677 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 13 21:25:01.879685 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 21:25:01.879692 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 21:25:01.879700 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 21:25:01.879707 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 21:25:01.879715 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 21:25:01.879725 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 21:25:01.879850 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 13 21:25:01.879964 kernel: 
rtc_cmos 00:04: registered as rtc0 Jan 13 21:25:01.880077 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:25:01 UTC (1736803501) Jan 13 21:25:01.880190 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 13 21:25:01.880202 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 13 21:25:01.880210 kernel: NET: Registered PF_INET6 protocol family Jan 13 21:25:01.880219 kernel: Segment Routing with IPv6 Jan 13 21:25:01.880231 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 21:25:01.880239 kernel: NET: Registered PF_PACKET protocol family Jan 13 21:25:01.880246 kernel: Key type dns_resolver registered Jan 13 21:25:01.880254 kernel: IPI shorthand broadcast: enabled Jan 13 21:25:01.880262 kernel: sched_clock: Marking stable (551003095, 104299110)->(699945678, -44643473) Jan 13 21:25:01.880269 kernel: registered taskstats version 1 Jan 13 21:25:01.880277 kernel: Loading compiled-in X.509 certificates Jan 13 21:25:01.880285 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 13 21:25:01.880292 kernel: Key type .fscrypt registered Jan 13 21:25:01.880302 kernel: Key type fscrypt-provisioning registered Jan 13 21:25:01.880310 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 21:25:01.880317 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:25:01.880325 kernel: ima: No architecture policies found Jan 13 21:25:01.880332 kernel: clk: Disabling unused clocks Jan 13 21:25:01.880349 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 13 21:25:01.880357 kernel: Write protecting the kernel read-only data: 36864k Jan 13 21:25:01.880365 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 13 21:25:01.880372 kernel: Run /init as init process Jan 13 21:25:01.880382 kernel: with arguments: Jan 13 21:25:01.880390 kernel: /init Jan 13 21:25:01.880452 kernel: with environment: Jan 13 21:25:01.880461 kernel: HOME=/ Jan 13 21:25:01.880468 kernel: TERM=linux Jan 13 21:25:01.880475 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:25:01.880485 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:25:01.880495 systemd[1]: Detected virtualization kvm. Jan 13 21:25:01.880516 systemd[1]: Detected architecture x86-64. Jan 13 21:25:01.880527 systemd[1]: Running in initrd. Jan 13 21:25:01.880538 systemd[1]: No hostname configured, using default hostname. Jan 13 21:25:01.880548 systemd[1]: Hostname set to . Jan 13 21:25:01.880560 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:25:01.880568 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:25:01.880576 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:25:01.880584 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:25:01.880597 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:25:01.880620 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 13 21:25:01.880635 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:25:01.880646 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:25:01.880660 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:25:01.880675 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:25:01.880686 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:25:01.880698 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:25:01.880709 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:25:01.880720 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:25:01.880731 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:25:01.880743 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:25:01.880754 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:25:01.880769 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:25:01.880780 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:25:01.880791 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:25:01.880800 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:25:01.880812 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:25:01.880824 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:25:01.880835 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:25:01.880847 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:25:01.880861 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:25:01.880872 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:25:01.880883 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:25:01.880894 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:25:01.880906 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:25:01.880916 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:25:01.880925 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:25:01.880933 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:25:01.880943 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:25:01.880983 systemd-journald[193]: Collecting audit messages is disabled. Jan 13 21:25:01.881013 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:25:01.881027 systemd-journald[193]: Journal started Jan 13 21:25:01.881054 systemd-journald[193]: Runtime Journal (/run/log/journal/e638c309600c4717a5fe3dcda231706e) is 6.0M, max 48.4M, 42.3M free. Jan 13 21:25:01.866426 systemd-modules-load[194]: Inserted module 'overlay' Jan 13 21:25:01.901989 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 13 21:25:01.902004 kernel: Bridge firewalling registered Jan 13 21:25:01.892969 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 13 21:25:01.904673 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:25:01.905117 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:25:01.907443 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:25:01.909793 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:25:01.925578 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:25:01.928643 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:25:01.931539 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:25:01.934563 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:25:01.937498 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:25:01.942521 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 21:25:01.943940 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:25:01.946260 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:25:01.967084 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:25:01.969624 dracut-cmdline[222]: dracut-dracut-053 Jan 13 21:25:01.970375 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:25:01.974484 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:25:02.005080 systemd-resolved[236]: Positive Trust Anchors: Jan 13 21:25:02.005092 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:25:02.005122 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:25:02.007535 systemd-resolved[236]: Defaulting to hostname 'linux'. Jan 13 21:25:02.008606 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:25:02.014709 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:25:02.067430 kernel: SCSI subsystem initialized Jan 13 21:25:02.076436 kernel: Loading iSCSI transport class v2.0-870. 
Jan 13 21:25:02.087424 kernel: iscsi: registered transport (tcp) Jan 13 21:25:02.108443 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:25:02.108471 kernel: QLogic iSCSI HBA Driver Jan 13 21:25:02.154656 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:25:02.163592 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:25:02.188251 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 21:25:02.188289 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:25:02.188308 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:25:02.228428 kernel: raid6: avx2x4 gen() 30193 MB/s Jan 13 21:25:02.245423 kernel: raid6: avx2x2 gen() 30873 MB/s Jan 13 21:25:02.262497 kernel: raid6: avx2x1 gen() 25936 MB/s Jan 13 21:25:02.262517 kernel: raid6: using algorithm avx2x2 gen() 30873 MB/s Jan 13 21:25:02.280504 kernel: raid6: .... xor() 19876 MB/s, rmw enabled Jan 13 21:25:02.280526 kernel: raid6: using avx2x2 recovery algorithm Jan 13 21:25:02.300425 kernel: xor: automatically using best checksumming function avx Jan 13 21:25:02.452433 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:25:02.465360 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:25:02.477587 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:25:02.488978 systemd-udevd[412]: Using default interface naming scheme 'v255'. Jan 13 21:25:02.493566 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:25:02.501535 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:25:02.514084 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Jan 13 21:25:02.544491 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:25:02.557528 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:25:02.618864 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:25:02.625584 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:25:02.641041 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:25:02.643017 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:25:02.643356 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:25:02.643877 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:25:02.652425 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 13 21:25:02.679226 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 21:25:02.679376 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:25:02.679389 kernel: GPT:9289727 != 19775487 Jan 13 21:25:02.679412 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:25:02.679428 kernel: GPT:9289727 != 19775487 Jan 13 21:25:02.679437 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:25:02.679447 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:25:02.679458 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 21:25:02.660774 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:25:02.680946 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 13 21:25:02.685367 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:25:02.686726 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:25:02.689639 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:25:02.693723 kernel: libata version 3.00 loaded. Jan 13 21:25:02.692245 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:25:02.692374 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:25:02.693885 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:25:02.702088 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (471) Jan 13 21:25:02.705055 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 21:25:02.705076 kernel: AES CTR mode by8 optimization enabled Jan 13 21:25:02.710422 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (470) Jan 13 21:25:02.712526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:25:02.719415 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 21:25:02.748931 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 21:25:02.748947 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 21:25:02.749098 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 21:25:02.749236 kernel: scsi host0: ahci Jan 13 21:25:02.749393 kernel: scsi host1: ahci Jan 13 21:25:02.749567 kernel: scsi host2: ahci Jan 13 21:25:02.749719 kernel: scsi host3: ahci Jan 13 21:25:02.749859 kernel: scsi host4: ahci Jan 13 21:25:02.749997 kernel: scsi host5: ahci Jan 13 21:25:02.750134 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 13 21:25:02.750146 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 13 21:25:02.750156 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 13 21:25:02.750167 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 13 21:25:02.750180 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 13 21:25:02.750190 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 13 21:25:02.730332 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 21:25:02.755599 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 21:25:02.788210 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:25:02.794894 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 21:25:02.798145 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 21:25:02.805557 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:25:02.819523 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:25:02.826474 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:25:02.842426 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:25:02.962102 disk-uuid[562]: Primary Header is updated. 
Jan 13 21:25:02.962102 disk-uuid[562]: Secondary Entries is updated. Jan 13 21:25:02.962102 disk-uuid[562]: Secondary Header is updated. Jan 13 21:25:02.966420 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:25:02.970419 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:25:03.059232 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 21:25:03.059294 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 13 21:25:03.059313 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 21:25:03.059324 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 21:25:03.060421 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 21:25:03.061425 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 21:25:03.062431 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 13 21:25:03.062444 kernel: ata3.00: applying bridge limits Jan 13 21:25:03.063422 kernel: ata3.00: configured for UDMA/100 Jan 13 21:25:03.065431 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 21:25:03.102431 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 13 21:25:03.116158 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 21:25:03.116179 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 13 21:25:03.985056 disk-uuid[573]: The operation has completed successfully. Jan 13 21:25:03.986817 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:25:04.013529 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:25:04.013668 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:25:04.037517 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:25:04.042468 sh[589]: Success Jan 13 21:25:04.054448 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 13 21:25:04.085878 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:25:04.094743 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:25:04.099184 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 21:25:04.126359 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 13 21:25:04.126390 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:25:04.126416 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:25:04.126435 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:25:04.127100 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:25:04.131804 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:25:04.132538 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:25:04.138568 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:25:04.140644 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 13 21:25:04.149758 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:25:04.149796 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:25:04.149808 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:25:04.152415 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:25:04.161436 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:25:04.163457 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:25:04.243823 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:25:04.254567 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:25:04.284140 systemd-networkd[767]: lo: Link UP Jan 13 21:25:04.284152 systemd-networkd[767]: lo: Gained carrier Jan 13 21:25:04.290850 systemd-networkd[767]: Enumeration completed Jan 13 21:25:04.291034 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:25:04.291298 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:25:04.291303 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:25:04.292411 systemd-networkd[767]: eth0: Link UP Jan 13 21:25:04.292416 systemd-networkd[767]: eth0: Gained carrier Jan 13 21:25:04.292423 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:25:04.303812 systemd[1]: Reached target network.target - Network. Jan 13 21:25:04.308177 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:25:04.315482 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:25:04.317122 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 21:25:04.368178 ignition[772]: Ignition 2.19.0 Jan 13 21:25:04.368188 ignition[772]: Stage: fetch-offline Jan 13 21:25:04.368225 ignition[772]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:25:04.368235 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:25:04.368346 ignition[772]: parsed url from cmdline: "" Jan 13 21:25:04.368349 ignition[772]: no config URL provided Jan 13 21:25:04.368354 ignition[772]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:25:04.368363 ignition[772]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:25:04.368388 ignition[772]: op(1): [started] loading QEMU firmware config module Jan 13 21:25:04.368394 ignition[772]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 21:25:04.375409 ignition[772]: op(1): [finished] loading QEMU firmware config module Jan 13 21:25:04.414714 ignition[772]: parsing config with SHA512: d7d85aa5695ee1a1cb47cd15f4b45d3cea98c269ad1e6cc7c3274b2a1889d55cb180651263a77c7297eaa2cfbb9741e65cd38fcec6c2a36ea3cf22acf9d72cef Jan 13 21:25:04.419941 unknown[772]: fetched base config from "system" Jan 13 21:25:04.419953 unknown[772]: fetched user config from "qemu" Jan 13 21:25:04.420378 ignition[772]: fetch-offline: fetch-offline passed Jan 13 21:25:04.420461 ignition[772]: Ignition finished successfully Jan 13 21:25:04.422872 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 13 21:25:04.424758 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 21:25:04.434580 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:25:04.446087 ignition[783]: Ignition 2.19.0 Jan 13 21:25:04.446098 ignition[783]: Stage: kargs Jan 13 21:25:04.446251 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:25:04.446263 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:25:04.450148 ignition[783]: kargs: kargs passed Jan 13 21:25:04.450202 ignition[783]: Ignition finished successfully Jan 13 21:25:04.454701 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:25:04.468564 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 21:25:04.478609 ignition[792]: Ignition 2.19.0 Jan 13 21:25:04.478619 ignition[792]: Stage: disks Jan 13 21:25:04.478793 ignition[792]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:25:04.478805 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:25:04.482628 ignition[792]: disks: disks passed Jan 13 21:25:04.483321 ignition[792]: Ignition finished successfully Jan 13 21:25:04.486126 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:25:04.486387 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 21:25:04.489437 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:25:04.491736 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:25:04.491944 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:25:04.495648 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:25:04.507511 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:25:04.521611 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 21:25:04.528247 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:25:04.536519 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:25:04.623434 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 13 21:25:04.624490 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:25:04.626762 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:25:04.634521 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:25:04.636509 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:25:04.637856 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:25:04.642763 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811) Jan 13 21:25:04.637905 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jan 13 21:25:04.648565 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:25:04.648592 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:25:04.648613 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:25:04.648627 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:25:04.637934 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:25:04.650677 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:25:04.655635 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:25:04.671572 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 21:25:04.703194 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:25:04.708051 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:25:04.711888 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:25:04.715750 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:25:04.794654 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:25:04.809580 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:25:04.811307 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:25:04.821424 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:25:04.837074 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:25:04.844366 ignition[926]: INFO : Ignition 2.19.0 Jan 13 21:25:04.844366 ignition[926]: INFO : Stage: mount Jan 13 21:25:04.846272 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:25:04.846272 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:25:04.846272 ignition[926]: INFO : mount: mount passed Jan 13 21:25:04.846272 ignition[926]: INFO : Ignition finished successfully Jan 13 21:25:04.851780 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:25:04.870534 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:25:05.124732 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:25:05.136557 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:25:05.145431 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939) Jan 13 21:25:05.145477 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:25:05.145493 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:25:05.146894 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:25:05.149417 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:25:05.150694 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:25:05.181533 ignition[956]: INFO : Ignition 2.19.0 Jan 13 21:25:05.181533 ignition[956]: INFO : Stage: files Jan 13 21:25:05.183849 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:25:05.183849 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:25:05.183849 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:25:05.188236 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:25:05.188236 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:25:05.188236 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:25:05.188236 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:25:05.188236 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:25:05.187010 unknown[956]: wrote ssh authorized keys file for user: core Jan 13 21:25:05.196290 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:25:05.196290 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 21:25:05.245227 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 21:25:05.353353 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:25:05.353353 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 21:25:05.357194 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 13 21:25:05.698582 systemd-networkd[767]: eth0: Gained IPv6LL Jan 13 21:25:05.843336 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 21:25:06.181788 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 21:25:06.181788 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:25:06.185651 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:25:06.185651 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:25:06.185651 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:25:06.185651 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:25:06.185651 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:25:06.185651 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:25:06.185651 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:25:06.185651 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:25:06.185651 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:25:06.185651 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:25:06.185651 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:25:06.185651 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:25:06.185651 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 13 21:25:06.620929 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 21:25:07.100061 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:25:07.100061 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 13 21:25:07.103815 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:25:07.106185 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:25:07.106185 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 13 21:25:07.106185 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 13 21:25:07.110581 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:25:07.112509 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:25:07.112509 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 13 21:25:07.112509 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 21:25:07.142052 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:25:07.152944 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:25:07.154675 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 21:25:07.154675 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:25:07.154675 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:25:07.154675 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:25:07.154675 
ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:25:07.154675 ignition[956]: INFO : files: files passed Jan 13 21:25:07.154675 ignition[956]: INFO : Ignition finished successfully Jan 13 21:25:07.156463 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:25:07.167549 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:25:07.169409 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:25:07.172710 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:25:07.172824 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:25:07.180141 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 21:25:07.183093 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:25:07.183093 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:25:07.186216 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:25:07.185746 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:25:07.187623 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:25:07.194530 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:25:07.220132 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:25:07.220279 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:25:07.222630 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:25:07.224707 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:25:07.224821 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:25:07.225594 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:25:07.246321 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:25:07.258529 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:25:07.268935 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:25:07.270250 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:25:07.272456 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:25:07.274470 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:25:07.274580 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:25:07.276899 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:25:07.278470 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:25:07.280504 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:25:07.282554 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:25:07.284564 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:25:07.286866 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
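The files stage logged above (fetching the helm tarball, writing prepare-helm.service, setting its preset to enabled) is driven by the Ignition config supplied to the VM. A rough Butane sketch that would yield similar entries once transpiled to Ignition JSON; the unit body and mode here are illustrative assumptions, not the config actually served to this machine:

    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
          mode: 0644
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true   # appears in the log as "setting preset to enabled"
          contents: |
            [Unit]
            Description=Unpack helm to /opt/bin (example body)
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/tar -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz -C /opt/bin --strip-components=1 linux-amd64/helm
            [Install]
            WantedBy=multi-user.target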
Jan 13 21:25:07.288856 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:25:07.291241 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:25:07.293207 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:25:07.295388 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:25:07.297157 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:25:07.297306 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:25:07.299611 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:25:07.301055 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:25:07.303126 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:25:07.303264 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:25:07.305357 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:25:07.305483 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:25:07.307860 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:25:07.307971 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:25:07.309836 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:25:07.311591 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:25:07.311717 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:25:07.314215 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:25:07.316039 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:25:07.318014 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:25:07.318109 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:25:07.320019 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:25:07.320112 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:25:07.322131 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:25:07.322257 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:25:07.324201 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:25:07.324309 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:25:07.346601 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:25:07.349286 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:25:07.350183 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:25:07.350302 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:25:07.352411 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:25:07.352563 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:25:07.357774 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:25:07.358062 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 13 21:25:07.363059 ignition[1010]: INFO : Ignition 2.19.0 Jan 13 21:25:07.363059 ignition[1010]: INFO : Stage: umount Jan 13 21:25:07.364938 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:25:07.364938 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:25:07.368204 ignition[1010]: INFO : umount: umount passed Jan 13 21:25:07.369084 ignition[1010]: INFO : Ignition finished successfully Jan 13 21:25:07.372446 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:25:07.372602 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:25:07.375743 systemd[1]: Stopped target network.target - Network. Jan 13 21:25:07.375833 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:25:07.375897 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:25:07.377566 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:25:07.377615 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:25:07.377935 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:25:07.377993 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:25:07.378316 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:25:07.378364 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:25:07.384393 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:25:07.386598 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:25:07.389686 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:25:07.391448 systemd-networkd[767]: eth0: DHCPv6 lease lost Jan 13 21:25:07.393776 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:25:07.393908 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:25:07.396055 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:25:07.396097 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:25:07.412514 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:25:07.413484 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:25:07.413552 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:25:07.415861 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:25:07.420306 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:25:07.420447 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:25:07.424732 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:25:07.424796 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:25:07.426042 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:25:07.426106 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:25:07.428168 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:25:07.428225 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:25:07.431862 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:25:07.431985 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:25:07.437137 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 13 21:25:07.437324 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:25:07.439735 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:25:07.439786 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:25:07.441488 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:25:07.441529 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:25:07.443792 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:25:07.443838 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:25:07.446181 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:25:07.446246 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:25:07.448335 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:25:07.448381 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:25:07.457526 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:25:07.458763 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:25:07.458817 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:25:07.461148 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 21:25:07.461205 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:25:07.463478 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:25:07.463527 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:25:07.463615 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:25:07.463659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:25:07.464801 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:25:07.464912 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:25:07.615990 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:25:07.616127 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:25:07.618306 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:25:07.618946 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:25:07.619008 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:25:07.632541 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:25:07.640949 systemd[1]: Switching root. Jan 13 21:25:07.669880 systemd-journald[193]: Journal stopped Jan 13 21:25:08.852695 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Jan 13 21:25:08.852782 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:25:08.852803 kernel: SELinux: policy capability open_perms=1 Jan 13 21:25:08.852815 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:25:08.852826 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:25:08.852837 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:25:08.852849 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:25:08.852861 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:25:08.852872 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:25:08.852884 kernel: audit: type=1403 audit(1736803508.110:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:25:08.852906 systemd[1]: Successfully loaded SELinux policy in 45.398ms. Jan 13 21:25:08.852934 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.592ms. Jan 13 21:25:08.852947 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:25:08.852959 systemd[1]: Detected virtualization kvm. Jan 13 21:25:08.852972 systemd[1]: Detected architecture x86-64. Jan 13 21:25:08.852984 systemd[1]: Detected first boot. Jan 13 21:25:08.852996 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:25:08.853009 zram_generator::config[1055]: No configuration found. Jan 13 21:25:08.853025 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:25:08.853037 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:25:08.853049 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:25:08.853061 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:25:08.853074 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:25:08.853086 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:25:08.853099 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:25:08.853111 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:25:08.853126 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:25:08.853139 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:25:08.853158 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:25:08.853171 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:25:08.853183 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:25:08.853196 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:25:08.853208 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:25:08.853220 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:25:08.853232 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 13 21:25:08.853248 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:25:08.853261 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:25:08.853274 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:25:08.853286 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:25:08.853298 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:25:08.853310 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:25:08.853323 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:25:08.853335 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:25:08.853350 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:25:08.853362 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:25:08.853375 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:25:08.853387 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:25:08.853417 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:25:08.853430 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:25:08.853443 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:25:08.853457 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:25:08.853469 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:25:08.853484 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:25:08.853497 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:25:08.853510 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:25:08.853522 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:25:08.853534 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:25:08.853546 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:25:08.853558 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:25:08.853571 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:25:08.853583 systemd[1]: Reached target machines.target - Containers. Jan 13 21:25:08.853599 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:25:08.853611 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:25:08.853623 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:25:08.853635 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:25:08.853666 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:25:08.853678 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:25:08.853690 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:25:08.853702 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 13 21:25:08.853717 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:25:08.853730 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:25:08.853742 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:25:08.853754 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:25:08.853766 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:25:08.853778 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:25:08.853789 kernel: fuse: init (API version 7.39) Jan 13 21:25:08.853801 kernel: loop: module loaded Jan 13 21:25:08.853813 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:25:08.853828 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:25:08.853840 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:25:08.853852 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:25:08.853864 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:25:08.853889 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:25:08.853902 systemd[1]: Stopped verity-setup.service. Jan 13 21:25:08.853914 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:25:08.853926 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:25:08.853946 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:25:08.853961 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:25:08.853974 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:25:08.853986 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:25:08.853998 kernel: ACPI: bus type drm_connector registered Jan 13 21:25:08.854012 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:25:08.854024 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:25:08.854036 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:25:08.854068 systemd-journald[1125]: Collecting audit messages is disabled. Jan 13 21:25:08.854090 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:25:08.854103 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:25:08.854115 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:25:08.854127 systemd-journald[1125]: Journal started Jan 13 21:25:08.854160 systemd-journald[1125]: Runtime Journal (/run/log/journal/e638c309600c4717a5fe3dcda231706e) is 6.0M, max 48.4M, 42.3M free. Jan 13 21:25:08.611483 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:25:08.630994 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 21:25:08.631503 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:25:08.858448 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:25:08.859706 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:25:08.861275 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 13 21:25:08.861599 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:25:08.863005 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:25:08.863243 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:25:08.864986 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:25:08.865241 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:25:08.866753 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:25:08.867003 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:25:08.868691 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:25:08.870163 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:25:08.871728 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:25:08.888237 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:25:08.895522 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:25:08.898163 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:25:08.899348 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:25:08.899389 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:25:08.901778 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:25:08.904505 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:25:08.908253 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:25:08.909500 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:25:08.912865 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:25:08.917126 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:25:08.918803 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:25:08.920104 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:25:08.923455 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:25:08.927094 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:25:08.936985 systemd-journald[1125]: Time spent on flushing to /var/log/journal/e638c309600c4717a5fe3dcda231706e is 40.909ms for 954 entries. Jan 13 21:25:08.936985 systemd-journald[1125]: System Journal (/var/log/journal/e638c309600c4717a5fe3dcda231706e) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:25:08.997212 systemd-journald[1125]: Received client request to flush runtime journal. Jan 13 21:25:08.997262 kernel: loop0: detected capacity change from 0 to 205544 Jan 13 21:25:08.998025 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:25:08.942213 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jan 13 21:25:08.946854 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:25:08.950165 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:25:08.952940 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:25:08.954358 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:25:08.955910 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:25:08.957570 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:25:08.970091 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:25:08.977699 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:25:08.980939 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:25:08.982855 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:25:09.000871 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:25:09.005162 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:25:09.006692 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:25:09.009443 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 21:25:09.012504 kernel: loop1: detected capacity change from 0 to 140768 Jan 13 21:25:09.014099 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Jan 13 21:25:09.014118 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Jan 13 21:25:09.021450 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:25:09.029671 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:25:09.046431 kernel: loop2: detected capacity change from 0 to 142488 Jan 13 21:25:09.055113 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:25:09.066905 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:25:09.081460 kernel: loop3: detected capacity change from 0 to 205544 Jan 13 21:25:09.083750 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 13 21:25:09.083773 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 13 21:25:09.090476 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:25:09.095511 kernel: loop4: detected capacity change from 0 to 140768 Jan 13 21:25:09.106427 kernel: loop5: detected capacity change from 0 to 142488 Jan 13 21:25:09.115178 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 21:25:09.116085 (sd-merge)[1196]: Merged extensions into '/usr'. Jan 13 21:25:09.119960 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:25:09.120054 systemd[1]: Reloading... Jan 13 21:25:09.180450 zram_generator::config[1223]: No configuration found. Jan 13 21:25:09.259003 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
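For context on the (sd-merge) lines above: systemd-sysext overlays extension images onto /usr and /opt at boot, which is how the containerd-flatcar, docker-flatcar, and kubernetes sysexts become part of the running system; the kubernetes image was exposed to it earlier in the files stage via the /etc/extensions/kubernetes.raw symlink. Roughly, the directories scanned for *.raw images or extension directories are (per the systemd-sysext documentation; the exact set can vary by version):

    /etc/extensions/        # admin-provided extensions (the kubernetes.raw symlink above lives here)
    /run/extensions/        # runtime, non-persistent extensions
    /var/lib/extensions/    # locally installed extensions
    /usr/lib/extensions/    # vendor-shipped extensions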
Jan 13 21:25:09.302270 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:25:09.350874 systemd[1]: Reloading finished in 230 ms. Jan 13 21:25:09.399679 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:25:09.401201 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:25:09.415574 systemd[1]: Starting ensure-sysext.service... Jan 13 21:25:09.417338 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:25:09.424051 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:25:09.424063 systemd[1]: Reloading... Jan 13 21:25:09.441118 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:25:09.441507 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:25:09.442525 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:25:09.442834 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 13 21:25:09.442914 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 13 21:25:09.446424 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:25:09.446436 systemd-tmpfiles[1261]: Skipping /boot Jan 13 21:25:09.459990 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:25:09.460163 systemd-tmpfiles[1261]: Skipping /boot Jan 13 21:25:09.477423 zram_generator::config[1288]: No configuration found. Jan 13 21:25:09.645636 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:25:09.695486 systemd[1]: Reloading finished in 271 ms. Jan 13 21:25:09.715724 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:25:09.733027 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:25:09.735711 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:25:09.737960 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:25:09.741557 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:25:09.747004 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:25:09.754220 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:25:09.754487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:25:09.756161 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:25:09.760711 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:25:09.764773 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 13 21:25:09.766297 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:25:09.768388 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:25:09.769579 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:25:09.771436 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:25:09.773735 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:25:09.773917 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:25:09.775773 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:25:09.776365 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:25:09.778947 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:25:09.779187 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:25:09.791296 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:25:09.792581 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:25:09.796737 augenrules[1354]: No rules Jan 13 21:25:09.800937 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:25:09.805387 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:25:09.810494 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:25:09.811989 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:25:09.812107 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:25:09.813685 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:25:09.815885 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:25:09.816566 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:25:09.817078 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:25:09.818211 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:25:09.818378 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:25:09.819137 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:25:09.819301 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:25:09.830581 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:25:09.830795 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:25:09.837468 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:25:09.840850 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:25:09.841236 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:25:09.855761 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 13 21:25:09.858211 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:25:09.860552 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:25:09.863037 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:25:09.864300 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:25:09.868879 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:25:09.871551 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:25:09.872741 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:25:09.872893 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:25:09.874336 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:25:09.874740 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:25:09.876511 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:25:09.876686 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:25:09.878570 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:25:09.878745 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:25:09.880755 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:25:09.880954 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:25:09.886719 systemd[1]: Finished ensure-sysext.service. Jan 13 21:25:09.889922 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:25:09.890010 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:25:09.894951 systemd-resolved[1330]: Positive Trust Anchors: Jan 13 21:25:09.895182 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:25:09.895224 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:25:09.895696 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 21:25:09.897957 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:25:09.900760 systemd-resolved[1330]: Defaulting to hostname 'linux'. Jan 13 21:25:09.901660 systemd-udevd[1378]: Using default interface naming scheme 'v255'. Jan 13 21:25:09.903377 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 13 21:25:09.904911 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:25:09.921769 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:25:09.929728 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:25:09.958175 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 21:25:09.969449 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 21:25:09.969552 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:25:09.999458 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1391) Jan 13 21:25:10.011633 systemd-networkd[1392]: lo: Link UP Jan 13 21:25:10.011644 systemd-networkd[1392]: lo: Gained carrier Jan 13 21:25:10.013503 systemd-networkd[1392]: Enumeration completed Jan 13 21:25:10.013595 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:25:10.015055 systemd[1]: Reached target network.target - Network. Jan 13 21:25:10.017891 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:25:10.017899 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:25:10.018604 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:25:10.018640 systemd-networkd[1392]: eth0: Link UP Jan 13 21:25:10.018644 systemd-networkd[1392]: eth0: Gained carrier Jan 13 21:25:10.018653 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:25:10.025028 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:25:10.034977 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 13 21:25:10.038558 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:25:10.038548 systemd-networkd[1392]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:25:10.039437 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. Jan 13 21:25:11.111075 systemd-resolved[1330]: Clock change detected. Flushing caches. Jan 13 21:25:11.111539 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 21:25:11.111607 systemd-timesyncd[1385]: Initial clock synchronization to Mon 2025-01-13 21:25:11.110981 UTC. Jan 13 21:25:11.127457 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 13 21:25:11.142411 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 13 21:25:11.143433 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 13 21:25:11.143647 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 13 21:25:11.151000 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:25:11.164590 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:25:11.180089 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:25:11.189445 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
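The eth0 setup above comes from the catch-all /usr/lib/systemd/network/zz-default.network that systemd-networkd matched. A minimal sketch of such a .network file (illustrative, not the exact shipped contents):

    [Match]
    Name=*

    [Network]
    DHCP=yes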
Jan 13 21:25:11.191496 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:25:11.274373 kernel: kvm_amd: TSC scaling supported Jan 13 21:25:11.274448 kernel: kvm_amd: Nested Virtualization enabled Jan 13 21:25:11.274462 kernel: kvm_amd: Nested Paging enabled Jan 13 21:25:11.275516 kernel: kvm_amd: LBR virtualization supported Jan 13 21:25:11.275542 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 13 21:25:11.276677 kernel: kvm_amd: Virtual GIF supported Jan 13 21:25:11.297373 kernel: EDAC MC: Ver: 3.0.0 Jan 13 21:25:11.325417 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:25:11.351511 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:25:11.353219 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:25:11.362721 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:25:11.395361 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:25:11.399248 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:25:11.400373 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:25:11.401522 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:25:11.402783 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:25:11.404208 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:25:11.405438 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:25:11.406676 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:25:11.407894 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:25:11.407923 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:25:11.408820 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:25:11.410272 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:25:11.412849 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:25:11.423430 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:25:11.426525 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:25:11.428434 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:25:11.429891 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:25:11.431091 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:25:11.432293 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:25:11.432343 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:25:11.433486 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:25:11.436069 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:25:11.436533 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:25:11.440513 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jan 13 21:25:11.444479 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:25:11.445631 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:25:11.446151 jq[1438]: false Jan 13 21:25:11.447599 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:25:11.451449 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:25:11.455215 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:25:11.482895 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:25:11.489535 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:25:11.491273 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:25:11.492007 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:25:11.493378 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:25:11.495719 extend-filesystems[1439]: Found loop3 Jan 13 21:25:11.495719 extend-filesystems[1439]: Found loop4 Jan 13 21:25:11.495719 extend-filesystems[1439]: Found loop5 Jan 13 21:25:11.495719 extend-filesystems[1439]: Found sr0 Jan 13 21:25:11.495719 extend-filesystems[1439]: Found vda Jan 13 21:25:11.495719 extend-filesystems[1439]: Found vda1 Jan 13 21:25:11.495719 extend-filesystems[1439]: Found vda2 Jan 13 21:25:11.495719 extend-filesystems[1439]: Found vda3 Jan 13 21:25:11.495719 extend-filesystems[1439]: Found usr Jan 13 21:25:11.495719 extend-filesystems[1439]: Found vda4 Jan 13 21:25:11.495719 extend-filesystems[1439]: Found vda6 Jan 13 21:25:11.495719 extend-filesystems[1439]: Found vda7 Jan 13 21:25:11.495719 extend-filesystems[1439]: Found vda9 Jan 13 21:25:11.495719 extend-filesystems[1439]: Checking size of /dev/vda9 Jan 13 21:25:11.497960 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:25:11.513379 dbus-daemon[1437]: [system] SELinux support is enabled Jan 13 21:25:11.501673 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:25:11.504757 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:25:11.525546 update_engine[1448]: I20250113 21:25:11.520815 1448 main.cc:92] Flatcar Update Engine starting Jan 13 21:25:11.525546 update_engine[1448]: I20250113 21:25:11.524744 1448 update_check_scheduler.cc:74] Next update check in 5m42s Jan 13 21:25:11.506916 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:25:11.525860 jq[1449]: true Jan 13 21:25:11.507993 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:25:11.508252 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:25:11.513624 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:25:11.522232 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 13 21:25:11.522289 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:25:11.525079 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:25:11.525102 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:25:11.535021 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:25:11.535438 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:25:11.545556 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:25:11.546088 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:25:11.549923 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:25:11.553734 extend-filesystems[1439]: Resized partition /dev/vda9 Jan 13 21:25:11.556455 tar[1458]: linux-amd64/helm Jan 13 21:25:11.556726 jq[1465]: true Jan 13 21:25:11.562164 extend-filesystems[1474]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:25:11.615350 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1403) Jan 13 21:25:11.690985 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:25:11.724320 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:25:11.724363 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:25:11.726003 systemd-logind[1447]: New seat seat0. Jan 13 21:25:11.727486 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:25:11.790361 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 21:25:11.796528 locksmithd[1473]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:25:11.854273 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:25:11.877586 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:25:11.916540 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:25:11.918595 systemd[1]: Started sshd@0-10.0.0.102:22-10.0.0.1:38564.service - OpenSSH per-connection server daemon (10.0.0.1:38564). Jan 13 21:25:11.922748 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:25:11.922946 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:25:11.926324 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:25:11.958101 tar[1458]: linux-amd64/LICENSE Jan 13 21:25:11.958181 tar[1458]: linux-amd64/README.md Jan 13 21:25:11.970416 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:25:11.995621 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:25:12.011634 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:25:12.028185 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:25:12.029446 systemd[1]: Reached target getty.target - Login Prompts. 
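sshd-keygen.service above creates the missing host keys (RSA, ECDSA, ED25519) on first boot. Done by hand with standard OpenSSH tooling it would look roughly like the sketch below, which is not the Flatcar unit's exact command:

    # Generate any host key types missing from /etc/ssh with default parameters
    ssh-keygen -A
    # Or create one type explicitly, e.g. the ed25519 key
    ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ''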
Jan 13 21:25:12.241399 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 21:25:12.321606 extend-filesystems[1474]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:25:12.321606 extend-filesystems[1474]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:25:12.321606 extend-filesystems[1474]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 21:25:12.325660 extend-filesystems[1439]: Resized filesystem in /dev/vda9 Jan 13 21:25:12.327204 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:25:12.327523 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:25:12.331084 bash[1491]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:25:12.332963 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:25:12.335151 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 21:25:12.384097 sshd[1507]: Connection closed by authenticating user core 10.0.0.1 port 38564 [preauth] Jan 13 21:25:12.387461 systemd[1]: sshd@0-10.0.0.102:22-10.0.0.1:38564.service: Deactivated successfully. Jan 13 21:25:12.405564 containerd[1466]: time="2025-01-13T21:25:12.405454449Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:25:12.428692 containerd[1466]: time="2025-01-13T21:25:12.428619482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:25:12.430705 containerd[1466]: time="2025-01-13T21:25:12.430656632Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:25:12.430705 containerd[1466]: time="2025-01-13T21:25:12.430686618Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:25:12.430785 containerd[1466]: time="2025-01-13T21:25:12.430714310Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:25:12.430962 containerd[1466]: time="2025-01-13T21:25:12.430926689Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:25:12.430962 containerd[1466]: time="2025-01-13T21:25:12.430951976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:25:12.431102 containerd[1466]: time="2025-01-13T21:25:12.431059037Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:25:12.431102 containerd[1466]: time="2025-01-13T21:25:12.431088282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:25:12.431353 containerd[1466]: time="2025-01-13T21:25:12.431306872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:25:12.431353 containerd[1466]: time="2025-01-13T21:25:12.431325567Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:25:12.431429 containerd[1466]: time="2025-01-13T21:25:12.431358739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:25:12.431429 containerd[1466]: time="2025-01-13T21:25:12.431373276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:25:12.431517 containerd[1466]: time="2025-01-13T21:25:12.431490566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:25:12.432012 containerd[1466]: time="2025-01-13T21:25:12.431981787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:25:12.432319 containerd[1466]: time="2025-01-13T21:25:12.432285076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:25:12.432319 containerd[1466]: time="2025-01-13T21:25:12.432313329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:25:12.432484 containerd[1466]: time="2025-01-13T21:25:12.432447932Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:25:12.432522 containerd[1466]: time="2025-01-13T21:25:12.432514116Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:25:12.440463 containerd[1466]: time="2025-01-13T21:25:12.440437785Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:25:12.440499 containerd[1466]: time="2025-01-13T21:25:12.440474604Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:25:12.440499 containerd[1466]: time="2025-01-13T21:25:12.440494532Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:25:12.440534 containerd[1466]: time="2025-01-13T21:25:12.440509139Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:25:12.440534 containerd[1466]: time="2025-01-13T21:25:12.440521542Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:25:12.440670 containerd[1466]: time="2025-01-13T21:25:12.440648651Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:25:12.440937 containerd[1466]: time="2025-01-13T21:25:12.440911123Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:25:12.441068 containerd[1466]: time="2025-01-13T21:25:12.441049132Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 13 21:25:12.441089 containerd[1466]: time="2025-01-13T21:25:12.441066765Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:25:12.441089 containerd[1466]: time="2025-01-13T21:25:12.441078838Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:25:12.441127 containerd[1466]: time="2025-01-13T21:25:12.441091612Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:25:12.441127 containerd[1466]: time="2025-01-13T21:25:12.441103774Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:25:12.441127 containerd[1466]: time="2025-01-13T21:25:12.441114865Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:25:12.441185 containerd[1466]: time="2025-01-13T21:25:12.441127058Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:25:12.441185 containerd[1466]: time="2025-01-13T21:25:12.441143479Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:25:12.441185 containerd[1466]: time="2025-01-13T21:25:12.441156924Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:25:12.441185 containerd[1466]: time="2025-01-13T21:25:12.441168937Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:25:12.441185 containerd[1466]: time="2025-01-13T21:25:12.441179887Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:25:12.441268 containerd[1466]: time="2025-01-13T21:25:12.441199524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:25:12.441268 containerd[1466]: time="2025-01-13T21:25:12.441212027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:25:12.441268 containerd[1466]: time="2025-01-13T21:25:12.441224471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:25:12.441268 containerd[1466]: time="2025-01-13T21:25:12.441235722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:25:12.441268 containerd[1466]: time="2025-01-13T21:25:12.441246833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:25:12.441268 containerd[1466]: time="2025-01-13T21:25:12.441259326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:25:12.441398 containerd[1466]: time="2025-01-13T21:25:12.441270567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:25:12.441398 containerd[1466]: time="2025-01-13T21:25:12.441281848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:25:12.441398 containerd[1466]: time="2025-01-13T21:25:12.441293721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 13 21:25:12.441398 containerd[1466]: time="2025-01-13T21:25:12.441307607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:25:12.441398 containerd[1466]: time="2025-01-13T21:25:12.441317986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:25:12.441398 containerd[1466]: time="2025-01-13T21:25:12.441328506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:25:12.441398 containerd[1466]: time="2025-01-13T21:25:12.441360225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:25:12.441398 containerd[1466]: time="2025-01-13T21:25:12.441374272Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:25:12.441398 containerd[1466]: time="2025-01-13T21:25:12.441392025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:25:12.441555 containerd[1466]: time="2025-01-13T21:25:12.441403116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:25:12.441555 containerd[1466]: time="2025-01-13T21:25:12.441412804Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:25:12.441555 containerd[1466]: time="2025-01-13T21:25:12.441459802Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:25:12.441555 containerd[1466]: time="2025-01-13T21:25:12.441484739Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:25:12.441555 containerd[1466]: time="2025-01-13T21:25:12.441508473Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:25:12.441555 containerd[1466]: time="2025-01-13T21:25:12.441521388Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:25:12.441555 containerd[1466]: time="2025-01-13T21:25:12.441530134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:25:12.441674 containerd[1466]: time="2025-01-13T21:25:12.441557195Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:25:12.441674 containerd[1466]: time="2025-01-13T21:25:12.441576681Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:25:12.441674 containerd[1466]: time="2025-01-13T21:25:12.441589525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:25:12.441879 containerd[1466]: time="2025-01-13T21:25:12.441824907Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:25:12.441879 containerd[1466]: time="2025-01-13T21:25:12.441876954Z" level=info msg="Connect containerd service" Jan 13 21:25:12.442037 containerd[1466]: time="2025-01-13T21:25:12.441908924Z" level=info msg="using legacy CRI server" Jan 13 21:25:12.442037 containerd[1466]: time="2025-01-13T21:25:12.441915667Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:25:12.442037 containerd[1466]: time="2025-01-13T21:25:12.441988674Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:25:12.442585 containerd[1466]: time="2025-01-13T21:25:12.442556970Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:25:12.442752 
containerd[1466]: time="2025-01-13T21:25:12.442710568Z" level=info msg="Start subscribing containerd event" Jan 13 21:25:12.442773 containerd[1466]: time="2025-01-13T21:25:12.442758758Z" level=info msg="Start recovering state" Jan 13 21:25:12.442935 containerd[1466]: time="2025-01-13T21:25:12.442910513Z" level=info msg="Start event monitor" Jan 13 21:25:12.442957 containerd[1466]: time="2025-01-13T21:25:12.442917546Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:25:12.442957 containerd[1466]: time="2025-01-13T21:25:12.442946741Z" level=info msg="Start snapshots syncer" Jan 13 21:25:12.442992 containerd[1466]: time="2025-01-13T21:25:12.442960627Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:25:12.442992 containerd[1466]: time="2025-01-13T21:25:12.442969944Z" level=info msg="Start streaming server" Jan 13 21:25:12.443037 containerd[1466]: time="2025-01-13T21:25:12.442998948Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:25:12.443106 containerd[1466]: time="2025-01-13T21:25:12.443081613Z" level=info msg="containerd successfully booted in 0.039163s" Jan 13 21:25:12.443197 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:25:12.784544 systemd-networkd[1392]: eth0: Gained IPv6LL Jan 13 21:25:12.788176 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:25:12.790053 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:25:12.799620 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:25:12.802455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:12.804842 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:25:12.824751 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:25:12.825033 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:25:12.826839 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:25:12.827732 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:25:13.440542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:13.442297 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:25:13.443639 systemd[1]: Startup finished in 680ms (kernel) + 6.415s (initrd) + 4.306s (userspace) = 11.403s. Jan 13 21:25:13.456431 (kubelet)[1555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:25:13.866887 kubelet[1555]: E0113 21:25:13.866759 1555 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:25:13.870823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:25:13.871077 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:25:22.396100 systemd[1]: Started sshd@1-10.0.0.102:22-10.0.0.1:59678.service - OpenSSH per-connection server daemon (10.0.0.1:59678). 
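The kubelet exit above is the usual pre-bootstrap state: kubelet.service is enabled before kubeadm (or another provisioner) has written /var/lib/kubelet/config.yaml, so it fails and systemd keeps restarting it until the node is initialized. For orientation, a minimal KubeletConfiguration of the kind that file normally carries might look like the sketch below; the values are illustrative and are not what was later generated on this host:

    # Illustrative only -- kubeadm normally writes this file during "kubeadm init" / "kubeadm join"
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # matches the SystemdCgroup=true runc option in the CRI config dump above
    staticPodPath: /etc/kubernetes/manifests
    EOF
    systemctl restart kubelet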
Jan 13 21:25:22.435893 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 59678 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:25:22.438050 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:22.446410 systemd-logind[1447]: New session 1 of user core. Jan 13 21:25:22.447688 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:25:22.467530 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:25:22.481421 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:25:22.483209 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:25:22.504871 (systemd)[1572]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:25:22.619603 systemd[1572]: Queued start job for default target default.target. Jan 13 21:25:22.628724 systemd[1572]: Created slice app.slice - User Application Slice. Jan 13 21:25:22.628762 systemd[1572]: Reached target paths.target - Paths. Jan 13 21:25:22.628776 systemd[1572]: Reached target timers.target - Timers. Jan 13 21:25:22.630358 systemd[1572]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:25:22.641664 systemd[1572]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:25:22.641799 systemd[1572]: Reached target sockets.target - Sockets. Jan 13 21:25:22.641816 systemd[1572]: Reached target basic.target - Basic System. Jan 13 21:25:22.641851 systemd[1572]: Reached target default.target - Main User Target. Jan 13 21:25:22.641883 systemd[1572]: Startup finished in 130ms. Jan 13 21:25:22.642398 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:25:22.644003 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:25:22.704062 systemd[1]: Started sshd@2-10.0.0.102:22-10.0.0.1:59680.service - OpenSSH per-connection server daemon (10.0.0.1:59680). Jan 13 21:25:22.740768 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 59680 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:25:22.742993 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:22.749649 systemd-logind[1447]: New session 2 of user core. Jan 13 21:25:22.759526 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:25:22.815521 sshd[1583]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:22.834962 systemd[1]: sshd@2-10.0.0.102:22-10.0.0.1:59680.service: Deactivated successfully. Jan 13 21:25:22.836836 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:25:22.839003 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:25:22.845760 systemd[1]: Started sshd@3-10.0.0.102:22-10.0.0.1:59688.service - OpenSSH per-connection server daemon (10.0.0.1:59688). Jan 13 21:25:22.846834 systemd-logind[1447]: Removed session 2. Jan 13 21:25:22.877935 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 59688 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:25:22.879642 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:22.883839 systemd-logind[1447]: New session 3 of user core. Jan 13 21:25:22.898483 systemd[1]: Started session-3.scope - Session 3 of User core. 
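The session bookkeeping above (user-500.slice, user@500.service, session-N.scope) is systemd-logind tracking each SSH login by the core user. The same state can be inspected with standard tools, for example (generic commands, not taken from the log):

    loginctl list-sessions                   # one entry per session-N.scope
    loginctl show-user core -p RuntimePath   # /run/user/500, set up by user-runtime-dir@500.service
    systemctl status user@500.service        # the per-user manager started above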
Jan 13 21:25:22.948933 sshd[1590]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:22.956240 systemd[1]: sshd@3-10.0.0.102:22-10.0.0.1:59688.service: Deactivated successfully. Jan 13 21:25:22.958128 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:25:22.959650 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:25:22.961027 systemd[1]: Started sshd@4-10.0.0.102:22-10.0.0.1:59698.service - OpenSSH per-connection server daemon (10.0.0.1:59698). Jan 13 21:25:22.962113 systemd-logind[1447]: Removed session 3. Jan 13 21:25:22.997257 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 59698 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:25:22.999025 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:23.003269 systemd-logind[1447]: New session 4 of user core. Jan 13 21:25:23.009471 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:25:23.063834 sshd[1597]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:23.077284 systemd[1]: sshd@4-10.0.0.102:22-10.0.0.1:59698.service: Deactivated successfully. Jan 13 21:25:23.079423 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:25:23.081059 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:25:23.082531 systemd[1]: Started sshd@5-10.0.0.102:22-10.0.0.1:59704.service - OpenSSH per-connection server daemon (10.0.0.1:59704). Jan 13 21:25:23.083250 systemd-logind[1447]: Removed session 4. Jan 13 21:25:23.120209 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 59704 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:25:23.122115 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:23.126365 systemd-logind[1447]: New session 5 of user core. Jan 13 21:25:23.136468 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:25:23.193400 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:25:23.193752 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:25:23.207877 sudo[1607]: pam_unix(sudo:session): session closed for user root Jan 13 21:25:23.209630 sshd[1604]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:23.232075 systemd[1]: sshd@5-10.0.0.102:22-10.0.0.1:59704.service: Deactivated successfully. Jan 13 21:25:23.233603 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:25:23.235128 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:25:23.236355 systemd[1]: Started sshd@6-10.0.0.102:22-10.0.0.1:59716.service - OpenSSH per-connection server daemon (10.0.0.1:59716). Jan 13 21:25:23.237193 systemd-logind[1447]: Removed session 5. Jan 13 21:25:23.270631 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 59716 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:25:23.272006 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:23.276004 systemd-logind[1447]: New session 6 of user core. Jan 13 21:25:23.282443 systemd[1]: Started session-6.scope - Session 6 of User core. 
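Each "sudo[...]: core : PWD=... COMMAND=..." line is sudo's own record of a privileged command run by core. A sudoers rule permitting such passwordless use would look like the hypothetical fragment below; the sudoers content actually shipped on this host is not part of the log:

    # Hypothetical /etc/sudoers.d/core fragment -- for illustration only
    cat >/etc/sudoers.d/core <<'EOF'
    core ALL=(ALL) NOPASSWD: ALL
    EOF
    visudo -cf /etc/sudoers.d/core           # syntax-check the fragment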
Jan 13 21:25:23.336622 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:25:23.337065 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:25:23.340923 sudo[1616]: pam_unix(sudo:session): session closed for user root Jan 13 21:25:23.348046 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:25:23.348460 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:25:23.367556 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:25:23.369096 auditctl[1619]: No rules Jan 13 21:25:23.370269 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:25:23.370529 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:25:23.372138 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:25:23.402352 augenrules[1637]: No rules Jan 13 21:25:23.403423 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:25:23.404854 sudo[1615]: pam_unix(sudo:session): session closed for user root Jan 13 21:25:23.406636 sshd[1612]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:23.417666 systemd[1]: sshd@6-10.0.0.102:22-10.0.0.1:59716.service: Deactivated successfully. Jan 13 21:25:23.419543 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:25:23.420961 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:25:23.431634 systemd[1]: Started sshd@7-10.0.0.102:22-10.0.0.1:59722.service - OpenSSH per-connection server daemon (10.0.0.1:59722). Jan 13 21:25:23.432711 systemd-logind[1447]: Removed session 6. Jan 13 21:25:23.463214 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 59722 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:25:23.465013 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:23.469550 systemd-logind[1447]: New session 7 of user core. Jan 13 21:25:23.483487 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:25:23.535611 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:25:23.535934 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:25:23.812560 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:25:23.812736 (dockerd)[1666]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:25:24.081399 dockerd[1666]: time="2025-01-13T21:25:24.081229585Z" level=info msg="Starting up" Jan 13 21:25:24.082415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:25:24.094556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:24.384654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
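The audit-rules restart above rebuilds the kernel audit ruleset from the fragments in /etc/audit/rules.d/; with 80-selinux.rules and 99-default.rules removed, both auditctl and augenrules end up reporting "No rules". The same cycle run by hand, as a sketch:

    auditctl -l            # list the rules currently loaded in the kernel ("No rules" here)
    augenrules --check     # compare the rules.d fragments with the compiled audit.rules file
    augenrules --load      # regenerate /etc/audit/audit.rules from rules.d and load it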
Jan 13 21:25:24.389215 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:25:24.750519 kubelet[1698]: E0113 21:25:24.750314 1698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:25:24.756771 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:25:24.756971 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:25:24.840629 dockerd[1666]: time="2025-01-13T21:25:24.840566783Z" level=info msg="Loading containers: start." Jan 13 21:25:24.957366 kernel: Initializing XFRM netlink socket Jan 13 21:25:25.037205 systemd-networkd[1392]: docker0: Link UP Jan 13 21:25:25.064798 dockerd[1666]: time="2025-01-13T21:25:25.064753443Z" level=info msg="Loading containers: done." Jan 13 21:25:25.077933 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2619539772-merged.mount: Deactivated successfully. Jan 13 21:25:25.122049 dockerd[1666]: time="2025-01-13T21:25:25.122016252Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:25:25.122395 dockerd[1666]: time="2025-01-13T21:25:25.122165231Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:25:25.122395 dockerd[1666]: time="2025-01-13T21:25:25.122276019Z" level=info msg="Daemon has completed initialization" Jan 13 21:25:25.760782 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:25:25.761703 dockerd[1666]: time="2025-01-13T21:25:25.760498153Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:25:26.521578 containerd[1466]: time="2025-01-13T21:25:26.521529469Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Jan 13 21:25:27.199370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2213388827.mount: Deactivated successfully. 
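The PullImage lines that follow are containerd's CRI plugin fetching the Kubernetes control-plane images. The log does not show which client requested them, but the same pulls can be reproduced against the CRI socket, for example:

    # Talk to containerd's CRI endpoint directly (socket path from the config dump above)
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-apiserver:v1.31.4
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images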
Jan 13 21:25:28.054044 containerd[1466]: time="2025-01-13T21:25:28.053986973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:28.054798 containerd[1466]: time="2025-01-13T21:25:28.054760424Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975483" Jan 13 21:25:28.056082 containerd[1466]: time="2025-01-13T21:25:28.056048580Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:28.059032 containerd[1466]: time="2025-01-13T21:25:28.058964298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:28.059983 containerd[1466]: time="2025-01-13T21:25:28.059951649Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 1.538375954s" Jan 13 21:25:28.059983 containerd[1466]: time="2025-01-13T21:25:28.059983028Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Jan 13 21:25:28.061267 containerd[1466]: time="2025-01-13T21:25:28.061249784Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Jan 13 21:25:29.209391 containerd[1466]: time="2025-01-13T21:25:29.209324043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:29.210003 containerd[1466]: time="2025-01-13T21:25:29.209964454Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702157" Jan 13 21:25:29.211135 containerd[1466]: time="2025-01-13T21:25:29.211104261Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:29.213789 containerd[1466]: time="2025-01-13T21:25:29.213751486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:29.214784 containerd[1466]: time="2025-01-13T21:25:29.214743797Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 1.153469537s" Jan 13 21:25:29.214845 containerd[1466]: time="2025-01-13T21:25:29.214786827Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Jan 13 21:25:29.215271 
containerd[1466]: time="2025-01-13T21:25:29.215249946Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Jan 13 21:25:30.660469 containerd[1466]: time="2025-01-13T21:25:30.660408214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:30.661360 containerd[1466]: time="2025-01-13T21:25:30.661295988Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652067" Jan 13 21:25:30.662556 containerd[1466]: time="2025-01-13T21:25:30.662510887Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:30.665271 containerd[1466]: time="2025-01-13T21:25:30.665234114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:30.666291 containerd[1466]: time="2025-01-13T21:25:30.666248566Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.450964416s" Jan 13 21:25:30.666350 containerd[1466]: time="2025-01-13T21:25:30.666286978Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Jan 13 21:25:30.666829 containerd[1466]: time="2025-01-13T21:25:30.666786054Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 21:25:31.773388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount481511269.mount: Deactivated successfully. 
Jan 13 21:25:32.494484 containerd[1466]: time="2025-01-13T21:25:32.494424328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:32.495529 containerd[1466]: time="2025-01-13T21:25:32.495472755Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243" Jan 13 21:25:32.496664 containerd[1466]: time="2025-01-13T21:25:32.496615027Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:32.499297 containerd[1466]: time="2025-01-13T21:25:32.499257232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:32.501135 containerd[1466]: time="2025-01-13T21:25:32.500894352Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 1.834057122s" Jan 13 21:25:32.501135 containerd[1466]: time="2025-01-13T21:25:32.500935349Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Jan 13 21:25:32.501706 containerd[1466]: time="2025-01-13T21:25:32.501658866Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:25:33.236573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount53714544.mount: Deactivated successfully. 
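Images pulled through the CRI plugin land in containerd's k8s.io namespace, so they can also be listed with containerd's own client; a quick check (generic command, not something the log runs):

    ctr --namespace k8s.io images ls         # shows kube-apiserver, kube-proxy, coredns, ... pulled above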
Jan 13 21:25:33.890958 containerd[1466]: time="2025-01-13T21:25:33.890890882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:33.891844 containerd[1466]: time="2025-01-13T21:25:33.891802000Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 21:25:33.893269 containerd[1466]: time="2025-01-13T21:25:33.893223656Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:33.898413 containerd[1466]: time="2025-01-13T21:25:33.898369066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:33.899536 containerd[1466]: time="2025-01-13T21:25:33.899500438Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.397735213s" Jan 13 21:25:33.899536 containerd[1466]: time="2025-01-13T21:25:33.899532097Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:25:33.900139 containerd[1466]: time="2025-01-13T21:25:33.899970970Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 21:25:34.422408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3452029115.mount: Deactivated successfully. 
Jan 13 21:25:34.429289 containerd[1466]: time="2025-01-13T21:25:34.429248915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:34.429947 containerd[1466]: time="2025-01-13T21:25:34.429904955Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 13 21:25:34.431022 containerd[1466]: time="2025-01-13T21:25:34.430983348Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:34.433198 containerd[1466]: time="2025-01-13T21:25:34.433157706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:34.433846 containerd[1466]: time="2025-01-13T21:25:34.433807634Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 533.807419ms" Jan 13 21:25:34.433846 containerd[1466]: time="2025-01-13T21:25:34.433835437Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 13 21:25:34.434327 containerd[1466]: time="2025-01-13T21:25:34.434285260Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 13 21:25:35.007280 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:25:35.015538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:35.159972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:35.164823 (kubelet)[1958]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:25:35.196859 kubelet[1958]: E0113 21:25:35.196801 1958 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:25:35.200658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:25:35.200852 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:25:35.346013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2349113338.mount: Deactivated successfully. 
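The pause image pulled just above is the sandbox (infra) container image. Which pause tag the CRI plugin uses by default is governed by the sandbox_image key in containerd's configuration; note that the config dump earlier in this log still shows registry.k8s.io/pause:3.8. A sketch of where that key lives (not the contents of this host's file):

    # Relevant key in /etc/containerd/config.toml (containerd 1.7, CRI v1 plugin):
    #   [plugins."io.containerd.grpc.v1.cri"]
    #     sandbox_image = "registry.k8s.io/pause:3.10"
    containerd config default | grep -n sandbox_image   # see the key in context in the default configuration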
Jan 13 21:25:38.252474 containerd[1466]: time="2025-01-13T21:25:38.252399417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:38.253373 containerd[1466]: time="2025-01-13T21:25:38.253271402Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 13 21:25:38.254839 containerd[1466]: time="2025-01-13T21:25:38.254801071Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:38.260352 containerd[1466]: time="2025-01-13T21:25:38.260294213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:38.261653 containerd[1466]: time="2025-01-13T21:25:38.261617514Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.827300294s" Jan 13 21:25:38.261694 containerd[1466]: time="2025-01-13T21:25:38.261654564Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 13 21:25:40.574219 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:40.583619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:40.612005 systemd[1]: Reloading requested from client PID 2050 ('systemctl') (unit session-7.scope)... Jan 13 21:25:40.612027 systemd[1]: Reloading... Jan 13 21:25:40.708510 zram_generator::config[2092]: No configuration found. Jan 13 21:25:41.118966 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:25:41.198787 systemd[1]: Reloading finished in 586 ms. Jan 13 21:25:41.260844 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:25:41.260947 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:25:41.261235 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:41.263929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:41.415298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:41.420271 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:25:41.460259 kubelet[2138]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:25:41.460259 kubelet[2138]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
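The "Flag ... has been deprecated" warnings here and just below mean this kubelet was launched with --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir on its command line, typically assembled from a systemd drop-in plus kubeadm-flags.env, rather than from the --config file. A hedged way to see where those flags come from on a kubeadm-style layout (paths are the usual defaults, not confirmed by this log):

    systemctl cat kubelet.service              # drop-ins that build the ExecStart line
    cat /var/lib/kubelet/kubeadm-flags.env     # usual source of --container-runtime-endpoint and friends
    ps -o args= -C kubelet                     # the flags the running kubelet actually received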
Jan 13 21:25:41.460259 kubelet[2138]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:25:41.754637 kubelet[2138]: I0113 21:25:41.754421 2138 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:25:42.017038 kubelet[2138]: I0113 21:25:42.016885 2138 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 21:25:42.017038 kubelet[2138]: I0113 21:25:42.016914 2138 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:25:42.017191 kubelet[2138]: I0113 21:25:42.017167 2138 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 21:25:42.036917 kubelet[2138]: I0113 21:25:42.036847 2138 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:25:42.037181 kubelet[2138]: E0113 21:25:42.037048 2138 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:25:42.043018 kubelet[2138]: E0113 21:25:42.042966 2138 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 21:25:42.043018 kubelet[2138]: I0113 21:25:42.043004 2138 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 21:25:42.051080 kubelet[2138]: I0113 21:25:42.051034 2138 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:25:42.052315 kubelet[2138]: I0113 21:25:42.052275 2138 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 21:25:42.052542 kubelet[2138]: I0113 21:25:42.052489 2138 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:25:42.052714 kubelet[2138]: I0113 21:25:42.052527 2138 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 21:25:42.052714 kubelet[2138]: I0113 21:25:42.052705 2138 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:25:42.052714 kubelet[2138]: I0113 21:25:42.052715 2138 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 21:25:42.052876 kubelet[2138]: I0113 21:25:42.052843 2138 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:25:42.054107 kubelet[2138]: I0113 21:25:42.054075 2138 kubelet.go:408] "Attempting to sync node with API server" Jan 13 21:25:42.054107 kubelet[2138]: I0113 21:25:42.054098 2138 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:25:42.054188 kubelet[2138]: I0113 21:25:42.054136 2138 kubelet.go:314] "Adding apiserver pod source" Jan 13 21:25:42.054188 kubelet[2138]: I0113 21:25:42.054147 2138 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:25:42.059422 kubelet[2138]: I0113 21:25:42.059392 2138 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:25:42.059505 kubelet[2138]: W0113 21:25:42.059425 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Jan 13 21:25:42.059505 kubelet[2138]: E0113 21:25:42.059495 2138 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:25:42.060867 kubelet[2138]: I0113 21:25:42.060848 2138 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:25:42.061188 kubelet[2138]: W0113 21:25:42.061153 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Jan 13 21:25:42.061231 kubelet[2138]: E0113 21:25:42.061198 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:25:42.061364 kubelet[2138]: W0113 21:25:42.061345 2138 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:25:42.062120 kubelet[2138]: I0113 21:25:42.061902 2138 server.go:1269] "Started kubelet" Jan 13 21:25:42.062505 kubelet[2138]: I0113 21:25:42.062455 2138 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:25:42.062875 kubelet[2138]: I0113 21:25:42.062840 2138 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:25:42.062926 kubelet[2138]: I0113 21:25:42.062900 2138 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:25:42.063701 kubelet[2138]: I0113 21:25:42.063074 2138 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:25:42.064998 kubelet[2138]: I0113 21:25:42.063791 2138 server.go:460] "Adding debug handlers to kubelet server" Jan 13 21:25:42.065470 kubelet[2138]: I0113 21:25:42.065448 2138 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 21:25:42.066418 kubelet[2138]: I0113 21:25:42.065929 2138 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 21:25:42.066418 kubelet[2138]: I0113 21:25:42.066023 2138 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 21:25:42.066418 kubelet[2138]: I0113 21:25:42.066088 2138 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:25:42.066533 kubelet[2138]: W0113 21:25:42.066428 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Jan 13 21:25:42.066533 kubelet[2138]: E0113 21:25:42.066472 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:25:42.067847 kubelet[2138]: E0113 21:25:42.067225 2138 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:25:42.067847 kubelet[2138]: E0113 21:25:42.067308 2138 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="200ms" Jan 13 21:25:42.067847 kubelet[2138]: I0113 21:25:42.067748 2138 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:25:42.067847 kubelet[2138]: I0113 21:25:42.067833 2138 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:25:42.069062 kubelet[2138]: I0113 21:25:42.068824 2138 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:25:42.069287 kubelet[2138]: E0113 21:25:42.066777 2138 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.102:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.102:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5d9e993d1b73 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:25:42.061882227 +0000 UTC m=+0.637139505,LastTimestamp:2025-01-13 21:25:42.061882227 +0000 UTC m=+0.637139505,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:25:42.078948 kubelet[2138]: I0113 21:25:42.078898 2138 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:25:42.080114 kubelet[2138]: I0113 21:25:42.080075 2138 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:25:42.080114 kubelet[2138]: I0113 21:25:42.080114 2138 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:25:42.080190 kubelet[2138]: I0113 21:25:42.080132 2138 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 21:25:42.080190 kubelet[2138]: E0113 21:25:42.080168 2138 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:25:42.081430 kubelet[2138]: W0113 21:25:42.081141 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Jan 13 21:25:42.081430 kubelet[2138]: E0113 21:25:42.081186 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:25:42.082923 kubelet[2138]: I0113 21:25:42.082227 2138 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:25:42.082923 kubelet[2138]: I0113 21:25:42.082250 2138 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:25:42.082923 kubelet[2138]: I0113 21:25:42.082263 2138 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:25:42.087019 kubelet[2138]: I0113 21:25:42.086976 2138 policy_none.go:49] "None policy: Start" Jan 13 21:25:42.087663 kubelet[2138]: I0113 21:25:42.087644 2138 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:25:42.087711 kubelet[2138]: I0113 21:25:42.087667 2138 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:25:42.097343 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:25:42.110163 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:25:42.112912 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 21:25:42.125160 kubelet[2138]: I0113 21:25:42.125130 2138 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:25:42.125395 kubelet[2138]: I0113 21:25:42.125366 2138 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 21:25:42.125395 kubelet[2138]: I0113 21:25:42.125382 2138 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:25:42.125593 kubelet[2138]: I0113 21:25:42.125575 2138 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:25:42.126728 kubelet[2138]: E0113 21:25:42.126701 2138 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 21:25:42.187886 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice - libcontainer container kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice. Jan 13 21:25:42.202085 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice - libcontainer container kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice. 
Jan 13 21:25:42.215773 systemd[1]: Created slice kubepods-burstable-podb8664ae65d9bd2f7f8e13ccf1cb9a6bf.slice - libcontainer container kubepods-burstable-podb8664ae65d9bd2f7f8e13ccf1cb9a6bf.slice. Jan 13 21:25:42.227037 kubelet[2138]: I0113 21:25:42.227012 2138 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:25:42.227346 kubelet[2138]: E0113 21:25:42.227311 2138 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Jan 13 21:25:42.267799 kubelet[2138]: I0113 21:25:42.267671 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8664ae65d9bd2f7f8e13ccf1cb9a6bf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b8664ae65d9bd2f7f8e13ccf1cb9a6bf\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:25:42.267799 kubelet[2138]: I0113 21:25:42.267718 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:42.267799 kubelet[2138]: I0113 21:25:42.267739 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:42.267799 kubelet[2138]: I0113 21:25:42.267755 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:42.267799 kubelet[2138]: I0113 21:25:42.267773 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:42.267961 kubelet[2138]: I0113 21:25:42.267791 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:42.267961 kubelet[2138]: I0113 21:25:42.267807 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:25:42.267961 kubelet[2138]: E0113 21:25:42.267808 2138 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="400ms" Jan 13 21:25:42.267961 kubelet[2138]: I0113 21:25:42.267823 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8664ae65d9bd2f7f8e13ccf1cb9a6bf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8664ae65d9bd2f7f8e13ccf1cb9a6bf\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:25:42.267961 kubelet[2138]: I0113 21:25:42.267871 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8664ae65d9bd2f7f8e13ccf1cb9a6bf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8664ae65d9bd2f7f8e13ccf1cb9a6bf\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:25:42.428555 kubelet[2138]: I0113 21:25:42.428528 2138 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:25:42.428846 kubelet[2138]: E0113 21:25:42.428812 2138 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Jan 13 21:25:42.499636 kubelet[2138]: E0113 21:25:42.499607 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:42.500255 containerd[1466]: time="2025-01-13T21:25:42.500190482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}" Jan 13 21:25:42.504376 kubelet[2138]: E0113 21:25:42.504357 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:42.504682 containerd[1466]: time="2025-01-13T21:25:42.504652820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}" Jan 13 21:25:42.517948 kubelet[2138]: E0113 21:25:42.517884 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:42.518188 containerd[1466]: time="2025-01-13T21:25:42.518161594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b8664ae65d9bd2f7f8e13ccf1cb9a6bf,Namespace:kube-system,Attempt:0,}" Jan 13 21:25:42.668710 kubelet[2138]: E0113 21:25:42.668660 2138 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="800ms" Jan 13 21:25:42.830587 kubelet[2138]: I0113 21:25:42.830458 2138 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:25:42.830770 kubelet[2138]: E0113 21:25:42.830729 2138 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Jan 13 21:25:43.074358 kubelet[2138]: W0113 21:25:43.074267 
2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Jan 13 21:25:43.074489 kubelet[2138]: E0113 21:25:43.074370 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:25:43.163988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1428654145.mount: Deactivated successfully. Jan 13 21:25:43.171680 containerd[1466]: time="2025-01-13T21:25:43.171637935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:43.172631 containerd[1466]: time="2025-01-13T21:25:43.172600961Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:43.173599 containerd[1466]: time="2025-01-13T21:25:43.173556553Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:25:43.174545 containerd[1466]: time="2025-01-13T21:25:43.174512346Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:43.175518 containerd[1466]: time="2025-01-13T21:25:43.175460724Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:25:43.176366 containerd[1466]: time="2025-01-13T21:25:43.176318914Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:43.177364 containerd[1466]: time="2025-01-13T21:25:43.177315222Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 21:25:43.179976 containerd[1466]: time="2025-01-13T21:25:43.179943812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:43.183131 containerd[1466]: time="2025-01-13T21:25:43.182877313Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 682.593356ms" Jan 13 21:25:43.183704 containerd[1466]: time="2025-01-13T21:25:43.183678195Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 665.460957ms" Jan 13 21:25:43.184391 containerd[1466]: time="2025-01-13T21:25:43.184353131Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 679.642994ms" Jan 13 21:25:43.200004 kubelet[2138]: W0113 21:25:43.199929 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Jan 13 21:25:43.200143 kubelet[2138]: E0113 21:25:43.200011 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:25:43.254957 kubelet[2138]: W0113 21:25:43.254885 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Jan 13 21:25:43.254957 kubelet[2138]: E0113 21:25:43.254957 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:25:43.282968 kubelet[2138]: W0113 21:25:43.282904 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Jan 13 21:25:43.283081 kubelet[2138]: E0113 21:25:43.282974 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:25:43.322056 containerd[1466]: time="2025-01-13T21:25:43.321935417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:25:43.322056 containerd[1466]: time="2025-01-13T21:25:43.322000980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:25:43.322056 containerd[1466]: time="2025-01-13T21:25:43.322015176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:43.322279 containerd[1466]: time="2025-01-13T21:25:43.321363865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:25:43.322279 containerd[1466]: time="2025-01-13T21:25:43.322099134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:43.323703 containerd[1466]: time="2025-01-13T21:25:43.323427635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:25:43.323703 containerd[1466]: time="2025-01-13T21:25:43.323445849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:43.323703 containerd[1466]: time="2025-01-13T21:25:43.323519077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:43.323703 containerd[1466]: time="2025-01-13T21:25:43.321179940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:25:43.323703 containerd[1466]: time="2025-01-13T21:25:43.323278365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:25:43.323703 containerd[1466]: time="2025-01-13T21:25:43.323300867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:43.324002 containerd[1466]: time="2025-01-13T21:25:43.323694726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:43.348500 systemd[1]: Started cri-containerd-492698fca7258e02f1b976e57eebb7e34e084b056b38b9547ee12fa6fec6c8af.scope - libcontainer container 492698fca7258e02f1b976e57eebb7e34e084b056b38b9547ee12fa6fec6c8af. Jan 13 21:25:43.350245 systemd[1]: Started cri-containerd-c7a22626ed0470dd881bba4ed3c52bc208895a7b0f1db24a62e9ef71d7c2d8a2.scope - libcontainer container c7a22626ed0470dd881bba4ed3c52bc208895a7b0f1db24a62e9ef71d7c2d8a2. Jan 13 21:25:43.352326 systemd[1]: Started cri-containerd-f6f44af88969d17960b7168ba83c2617ab0f7c8ddf2e5af2a2b655f78eb98e1a.scope - libcontainer container f6f44af88969d17960b7168ba83c2617ab0f7c8ddf2e5af2a2b655f78eb98e1a. 
Jan 13 21:25:43.388511 containerd[1466]: time="2025-01-13T21:25:43.388456387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b8664ae65d9bd2f7f8e13ccf1cb9a6bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"492698fca7258e02f1b976e57eebb7e34e084b056b38b9547ee12fa6fec6c8af\"" Jan 13 21:25:43.389886 kubelet[2138]: E0113 21:25:43.389841 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:43.391965 containerd[1466]: time="2025-01-13T21:25:43.391936123Z" level=info msg="CreateContainer within sandbox \"492698fca7258e02f1b976e57eebb7e34e084b056b38b9547ee12fa6fec6c8af\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:25:43.392870 containerd[1466]: time="2025-01-13T21:25:43.392810523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7a22626ed0470dd881bba4ed3c52bc208895a7b0f1db24a62e9ef71d7c2d8a2\"" Jan 13 21:25:43.393414 kubelet[2138]: E0113 21:25:43.393256 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:43.394458 containerd[1466]: time="2025-01-13T21:25:43.394429679Z" level=info msg="CreateContainer within sandbox \"c7a22626ed0470dd881bba4ed3c52bc208895a7b0f1db24a62e9ef71d7c2d8a2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:25:43.395290 containerd[1466]: time="2025-01-13T21:25:43.395225482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6f44af88969d17960b7168ba83c2617ab0f7c8ddf2e5af2a2b655f78eb98e1a\"" Jan 13 21:25:43.396291 kubelet[2138]: E0113 21:25:43.396272 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:43.397953 containerd[1466]: time="2025-01-13T21:25:43.397921147Z" level=info msg="CreateContainer within sandbox \"f6f44af88969d17960b7168ba83c2617ab0f7c8ddf2e5af2a2b655f78eb98e1a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:25:43.420257 containerd[1466]: time="2025-01-13T21:25:43.420101273Z" level=info msg="CreateContainer within sandbox \"492698fca7258e02f1b976e57eebb7e34e084b056b38b9547ee12fa6fec6c8af\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"48e097448fb0316e063724e6f9358477cb0613e29c59dcf7a9a69913002874f6\"" Jan 13 21:25:43.420922 containerd[1466]: time="2025-01-13T21:25:43.420872970Z" level=info msg="StartContainer for \"48e097448fb0316e063724e6f9358477cb0613e29c59dcf7a9a69913002874f6\"" Jan 13 21:25:43.429375 containerd[1466]: time="2025-01-13T21:25:43.429323207Z" level=info msg="CreateContainer within sandbox \"c7a22626ed0470dd881bba4ed3c52bc208895a7b0f1db24a62e9ef71d7c2d8a2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2430b800e7efcb328fd58673a4aec26bb2e2fc3b263cafa464a8a4c0673f6ca0\"" Jan 13 21:25:43.429873 containerd[1466]: time="2025-01-13T21:25:43.429849163Z" level=info msg="StartContainer for \"2430b800e7efcb328fd58673a4aec26bb2e2fc3b263cafa464a8a4c0673f6ca0\"" Jan 13 
21:25:43.434231 containerd[1466]: time="2025-01-13T21:25:43.434170267Z" level=info msg="CreateContainer within sandbox \"f6f44af88969d17960b7168ba83c2617ab0f7c8ddf2e5af2a2b655f78eb98e1a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b97340217ba11be63f4a5300bc88eb65708d67c7d0e1cb517a17c55db6dc68ba\"" Jan 13 21:25:43.434689 containerd[1466]: time="2025-01-13T21:25:43.434669884Z" level=info msg="StartContainer for \"b97340217ba11be63f4a5300bc88eb65708d67c7d0e1cb517a17c55db6dc68ba\"" Jan 13 21:25:43.450572 systemd[1]: Started cri-containerd-48e097448fb0316e063724e6f9358477cb0613e29c59dcf7a9a69913002874f6.scope - libcontainer container 48e097448fb0316e063724e6f9358477cb0613e29c59dcf7a9a69913002874f6. Jan 13 21:25:43.461478 systemd[1]: Started cri-containerd-2430b800e7efcb328fd58673a4aec26bb2e2fc3b263cafa464a8a4c0673f6ca0.scope - libcontainer container 2430b800e7efcb328fd58673a4aec26bb2e2fc3b263cafa464a8a4c0673f6ca0. Jan 13 21:25:43.464744 systemd[1]: Started cri-containerd-b97340217ba11be63f4a5300bc88eb65708d67c7d0e1cb517a17c55db6dc68ba.scope - libcontainer container b97340217ba11be63f4a5300bc88eb65708d67c7d0e1cb517a17c55db6dc68ba. Jan 13 21:25:43.469606 kubelet[2138]: E0113 21:25:43.469483 2138 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="1.6s" Jan 13 21:25:43.500185 containerd[1466]: time="2025-01-13T21:25:43.500017063Z" level=info msg="StartContainer for \"48e097448fb0316e063724e6f9358477cb0613e29c59dcf7a9a69913002874f6\" returns successfully" Jan 13 21:25:43.510523 containerd[1466]: time="2025-01-13T21:25:43.510230046Z" level=info msg="StartContainer for \"2430b800e7efcb328fd58673a4aec26bb2e2fc3b263cafa464a8a4c0673f6ca0\" returns successfully" Jan 13 21:25:43.516641 containerd[1466]: time="2025-01-13T21:25:43.516604270Z" level=info msg="StartContainer for \"b97340217ba11be63f4a5300bc88eb65708d67c7d0e1cb517a17c55db6dc68ba\" returns successfully" Jan 13 21:25:43.632249 kubelet[2138]: I0113 21:25:43.632213 2138 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:25:44.089909 kubelet[2138]: E0113 21:25:44.088935 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:44.094564 kubelet[2138]: E0113 21:25:44.094525 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:44.094809 kubelet[2138]: E0113 21:25:44.094793 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:44.477418 kubelet[2138]: I0113 21:25:44.477250 2138 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 13 21:25:44.477418 kubelet[2138]: E0113 21:25:44.477280 2138 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 13 21:25:44.484605 kubelet[2138]: E0113 21:25:44.484560 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:25:44.585321 kubelet[2138]: E0113 21:25:44.585252 2138 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:25:44.685933 kubelet[2138]: E0113 21:25:44.685871 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:25:44.786569 kubelet[2138]: E0113 21:25:44.786415 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:25:44.887132 kubelet[2138]: E0113 21:25:44.887070 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:25:44.987787 kubelet[2138]: E0113 21:25:44.987730 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:25:45.089253 kubelet[2138]: E0113 21:25:45.089017 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:25:45.096582 kubelet[2138]: E0113 21:25:45.096554 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:45.189415 kubelet[2138]: E0113 21:25:45.189352 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:25:45.290043 kubelet[2138]: E0113 21:25:45.289987 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:25:45.390865 kubelet[2138]: E0113 21:25:45.390724 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:25:45.491430 kubelet[2138]: E0113 21:25:45.491377 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:25:45.592458 kubelet[2138]: E0113 21:25:45.592410 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:25:45.693137 kubelet[2138]: E0113 21:25:45.693007 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:25:46.061582 kubelet[2138]: I0113 21:25:46.061416 2138 apiserver.go:52] "Watching apiserver" Jan 13 21:25:46.066563 kubelet[2138]: I0113 21:25:46.066514 2138 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 21:25:46.105947 kubelet[2138]: E0113 21:25:46.105903 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:46.859742 systemd[1]: Reloading requested from client PID 2415 ('systemctl') (unit session-7.scope)... Jan 13 21:25:46.859760 systemd[1]: Reloading... Jan 13 21:25:46.898751 kubelet[2138]: E0113 21:25:46.898707 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:46.941422 zram_generator::config[2457]: No configuration found. Jan 13 21:25:47.061368 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 13 21:25:47.098726 kubelet[2138]: E0113 21:25:47.098654 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:47.098846 kubelet[2138]: E0113 21:25:47.098809 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:47.156163 systemd[1]: Reloading finished in 295 ms. Jan 13 21:25:47.201044 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:47.221850 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:25:47.222149 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:47.229790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:47.381924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:47.387399 (kubelet)[2499]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:25:47.421625 kubelet[2499]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:25:47.421625 kubelet[2499]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:25:47.421625 kubelet[2499]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:25:47.421625 kubelet[2499]: I0113 21:25:47.421593 2499 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:25:47.428149 kubelet[2499]: I0113 21:25:47.428102 2499 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 21:25:47.428149 kubelet[2499]: I0113 21:25:47.428133 2499 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:25:47.428447 kubelet[2499]: I0113 21:25:47.428423 2499 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 21:25:47.429659 kubelet[2499]: I0113 21:25:47.429637 2499 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:25:47.431479 kubelet[2499]: I0113 21:25:47.431451 2499 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:25:47.434494 kubelet[2499]: E0113 21:25:47.434453 2499 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 21:25:47.434544 kubelet[2499]: I0113 21:25:47.434495 2499 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 21:25:47.439372 kubelet[2499]: I0113 21:25:47.439353 2499 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:25:47.439572 kubelet[2499]: I0113 21:25:47.439541 2499 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 21:25:47.439706 kubelet[2499]: I0113 21:25:47.439678 2499 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:25:47.439845 kubelet[2499]: I0113 21:25:47.439704 2499 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 21:25:47.439920 kubelet[2499]: I0113 21:25:47.439851 2499 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:25:47.439920 kubelet[2499]: I0113 21:25:47.439860 2499 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 21:25:47.439920 kubelet[2499]: I0113 21:25:47.439894 2499 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:25:47.440031 kubelet[2499]: I0113 21:25:47.439999 2499 kubelet.go:408] "Attempting to sync node with API server" Jan 13 21:25:47.440031 kubelet[2499]: I0113 21:25:47.440019 2499 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:25:47.440238 kubelet[2499]: I0113 21:25:47.440049 2499 kubelet.go:314] "Adding apiserver pod source" Jan 13 21:25:47.440238 kubelet[2499]: I0113 21:25:47.440064 2499 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:25:47.441467 kubelet[2499]: I0113 21:25:47.441443 2499 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:25:47.443951 kubelet[2499]: I0113 21:25:47.441992 2499 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:25:47.443951 kubelet[2499]: I0113 21:25:47.442555 2499 server.go:1269] "Started kubelet" Jan 13 21:25:47.445548 kubelet[2499]: I0113 21:25:47.445498 2499 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:25:47.446631 kubelet[2499]: I0113 
21:25:47.446564 2499 server.go:460] "Adding debug handlers to kubelet server" Jan 13 21:25:47.447538 kubelet[2499]: I0113 21:25:47.447511 2499 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:25:47.447768 kubelet[2499]: I0113 21:25:47.447719 2499 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:25:47.447960 kubelet[2499]: I0113 21:25:47.447930 2499 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:25:47.448998 kubelet[2499]: I0113 21:25:47.448975 2499 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 21:25:47.449828 kubelet[2499]: I0113 21:25:47.449806 2499 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 21:25:47.450041 kubelet[2499]: E0113 21:25:47.450018 2499 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:25:47.451322 kubelet[2499]: I0113 21:25:47.451292 2499 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 21:25:47.453373 kubelet[2499]: I0113 21:25:47.452500 2499 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:25:47.455806 kubelet[2499]: I0113 21:25:47.453806 2499 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:25:47.455806 kubelet[2499]: I0113 21:25:47.453907 2499 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:25:47.457318 kubelet[2499]: E0113 21:25:47.456175 2499 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:25:47.457318 kubelet[2499]: I0113 21:25:47.456959 2499 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:25:47.465151 kubelet[2499]: I0113 21:25:47.465106 2499 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:25:47.466677 kubelet[2499]: I0113 21:25:47.466538 2499 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:25:47.466677 kubelet[2499]: I0113 21:25:47.466579 2499 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:25:47.466677 kubelet[2499]: I0113 21:25:47.466599 2499 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 21:25:47.466677 kubelet[2499]: E0113 21:25:47.466643 2499 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:25:47.495365 kubelet[2499]: I0113 21:25:47.495296 2499 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:25:47.495365 kubelet[2499]: I0113 21:25:47.495318 2499 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:25:47.495365 kubelet[2499]: I0113 21:25:47.495352 2499 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:25:47.495542 kubelet[2499]: I0113 21:25:47.495489 2499 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:25:47.495542 kubelet[2499]: I0113 21:25:47.495500 2499 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:25:47.495542 kubelet[2499]: I0113 21:25:47.495519 2499 policy_none.go:49] "None policy: Start" Jan 13 21:25:47.495964 kubelet[2499]: I0113 21:25:47.495945 2499 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:25:47.495964 kubelet[2499]: I0113 21:25:47.495965 2499 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:25:47.496127 kubelet[2499]: I0113 21:25:47.496110 2499 state_mem.go:75] "Updated machine memory state" Jan 13 21:25:47.500290 kubelet[2499]: I0113 21:25:47.500254 2499 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:25:47.500556 kubelet[2499]: I0113 21:25:47.500522 2499 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 21:25:47.500556 kubelet[2499]: I0113 21:25:47.500539 2499 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:25:47.501037 kubelet[2499]: I0113 21:25:47.500741 2499 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:25:47.606154 kubelet[2499]: I0113 21:25:47.606101 2499 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:25:47.654423 kubelet[2499]: I0113 21:25:47.654362 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:25:47.654423 kubelet[2499]: I0113 21:25:47.654416 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8664ae65d9bd2f7f8e13ccf1cb9a6bf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8664ae65d9bd2f7f8e13ccf1cb9a6bf\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:25:47.654604 kubelet[2499]: I0113 21:25:47.654444 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8664ae65d9bd2f7f8e13ccf1cb9a6bf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8664ae65d9bd2f7f8e13ccf1cb9a6bf\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:25:47.654604 kubelet[2499]: I0113 21:25:47.654461 2499 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:47.654604 kubelet[2499]: I0113 21:25:47.654501 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:47.654604 kubelet[2499]: I0113 21:25:47.654542 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:47.654604 kubelet[2499]: I0113 21:25:47.654561 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:47.654723 kubelet[2499]: I0113 21:25:47.654578 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8664ae65d9bd2f7f8e13ccf1cb9a6bf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b8664ae65d9bd2f7f8e13ccf1cb9a6bf\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:25:47.654723 kubelet[2499]: I0113 21:25:47.654600 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:47.765106 kubelet[2499]: E0113 21:25:47.765071 2499 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 13 21:25:47.765275 kubelet[2499]: E0113 21:25:47.765132 2499 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 21:25:47.765313 kubelet[2499]: E0113 21:25:47.765276 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:47.765363 kubelet[2499]: E0113 21:25:47.765279 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:47.781501 kubelet[2499]: I0113 21:25:47.781468 2499 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 13 21:25:47.781641 kubelet[2499]: I0113 21:25:47.781569 2499 kubelet_node_status.go:75] "Successfully registered node" 
node="localhost" Jan 13 21:25:47.861724 sudo[2535]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 21:25:47.862257 sudo[2535]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 21:25:48.015197 kubelet[2499]: E0113 21:25:48.015058 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:48.370663 sudo[2535]: pam_unix(sudo:session): session closed for user root Jan 13 21:25:48.442782 kubelet[2499]: I0113 21:25:48.442732 2499 apiserver.go:52] "Watching apiserver" Jan 13 21:25:48.452420 kubelet[2499]: I0113 21:25:48.452371 2499 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 21:25:48.481446 kubelet[2499]: E0113 21:25:48.481011 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:48.487458 kubelet[2499]: E0113 21:25:48.487405 2499 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 13 21:25:48.487458 kubelet[2499]: E0113 21:25:48.487424 2499 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 21:25:48.487733 kubelet[2499]: E0113 21:25:48.487559 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:48.487733 kubelet[2499]: E0113 21:25:48.487606 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:48.501954 kubelet[2499]: I0113 21:25:48.501890 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.501857823 podStartE2EDuration="2.501857823s" podCreationTimestamp="2025-01-13 21:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:25:48.500354663 +0000 UTC m=+1.108889959" watchObservedRunningTime="2025-01-13 21:25:48.501857823 +0000 UTC m=+1.110393119" Jan 13 21:25:48.516465 kubelet[2499]: I0113 21:25:48.516395 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.516376975 podStartE2EDuration="1.516376975s" podCreationTimestamp="2025-01-13 21:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:25:48.515675791 +0000 UTC m=+1.124211108" watchObservedRunningTime="2025-01-13 21:25:48.516376975 +0000 UTC m=+1.124912291" Jan 13 21:25:48.719853 kubelet[2499]: I0113 21:25:48.719777 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.71975859 podStartE2EDuration="2.71975859s" podCreationTimestamp="2025-01-13 21:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 
21:25:48.719721779 +0000 UTC m=+1.328257085" watchObservedRunningTime="2025-01-13 21:25:48.71975859 +0000 UTC m=+1.328293886" Jan 13 21:25:49.480969 kubelet[2499]: E0113 21:25:49.480934 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:49.481427 kubelet[2499]: E0113 21:25:49.481098 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:50.479038 sudo[1648]: pam_unix(sudo:session): session closed for user root Jan 13 21:25:50.481694 sshd[1645]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:50.482479 kubelet[2499]: E0113 21:25:50.482385 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:50.485349 systemd[1]: sshd@7-10.0.0.102:22-10.0.0.1:59722.service: Deactivated successfully. Jan 13 21:25:50.487475 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:25:50.487681 systemd[1]: session-7.scope: Consumed 4.336s CPU time, 156.0M memory peak, 0B memory swap peak. Jan 13 21:25:50.489453 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:25:50.490661 systemd-logind[1447]: Removed session 7. Jan 13 21:25:51.372108 kubelet[2499]: I0113 21:25:51.372062 2499 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:25:51.372410 containerd[1466]: time="2025-01-13T21:25:51.372376569Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:25:51.372785 kubelet[2499]: I0113 21:25:51.372545 2499 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:25:51.649646 kubelet[2499]: E0113 21:25:51.649524 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:52.413112 systemd[1]: Created slice kubepods-besteffort-pod7579735a_98d6_4347_b989_1cca6b449d3a.slice - libcontainer container kubepods-besteffort-pod7579735a_98d6_4347_b989_1cca6b449d3a.slice. Jan 13 21:25:52.426298 systemd[1]: Created slice kubepods-burstable-pod0a89f52d_034f_4239_ab0d_8327fda3fdbe.slice - libcontainer container kubepods-burstable-pod0a89f52d_034f_4239_ab0d_8327fda3fdbe.slice. Jan 13 21:25:52.482623 systemd[1]: Created slice kubepods-besteffort-pod07608cf7_4cb1_454d_b687_59338432ac98.slice - libcontainer container kubepods-besteffort-pod07608cf7_4cb1_454d_b687_59338432ac98.slice. 
Jan 13 21:25:52.486170 kubelet[2499]: I0113 21:25:52.486124 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pncj\" (UniqueName: \"kubernetes.io/projected/07608cf7-4cb1-454d-b687-59338432ac98-kube-api-access-7pncj\") pod \"cilium-operator-5d85765b45-sgc4c\" (UID: \"07608cf7-4cb1-454d-b687-59338432ac98\") " pod="kube-system/cilium-operator-5d85765b45-sgc4c" Jan 13 21:25:52.486263 kubelet[2499]: I0113 21:25:52.486234 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7579735a-98d6-4347-b989-1cca6b449d3a-xtables-lock\") pod \"kube-proxy-9ln6h\" (UID: \"7579735a-98d6-4347-b989-1cca6b449d3a\") " pod="kube-system/kube-proxy-9ln6h" Jan 13 21:25:52.486291 kubelet[2499]: I0113 21:25:52.486281 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-etc-cni-netd\") pod \"cilium-wdzcv\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " pod="kube-system/cilium-wdzcv" Jan 13 21:25:52.486358 kubelet[2499]: I0113 21:25:52.486305 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a89f52d-034f-4239-ab0d-8327fda3fdbe-clustermesh-secrets\") pod \"cilium-wdzcv\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " pod="kube-system/cilium-wdzcv" Jan 13 21:25:52.486396 kubelet[2499]: I0113 21:25:52.486373 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07608cf7-4cb1-454d-b687-59338432ac98-cilium-config-path\") pod \"cilium-operator-5d85765b45-sgc4c\" (UID: \"07608cf7-4cb1-454d-b687-59338432ac98\") " pod="kube-system/cilium-operator-5d85765b45-sgc4c" Jan 13 21:25:52.486422 kubelet[2499]: I0113 21:25:52.486401 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-xtables-lock\") pod \"cilium-wdzcv\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " pod="kube-system/cilium-wdzcv" Jan 13 21:25:52.486447 kubelet[2499]: I0113 21:25:52.486422 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-cilium-cgroup\") pod \"cilium-wdzcv\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " pod="kube-system/cilium-wdzcv" Jan 13 21:25:52.486529 kubelet[2499]: I0113 21:25:52.486491 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-cilium-run\") pod \"cilium-wdzcv\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " pod="kube-system/cilium-wdzcv" Jan 13 21:25:52.486560 kubelet[2499]: I0113 21:25:52.486536 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-host-proc-sys-kernel\") pod \"cilium-wdzcv\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " pod="kube-system/cilium-wdzcv" Jan 13 21:25:52.486560 kubelet[2499]: I0113 21:25:52.486555 2499 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7579735a-98d6-4347-b989-1cca6b449d3a-lib-modules\") pod \"kube-proxy-9ln6h\" (UID: \"7579735a-98d6-4347-b989-1cca6b449d3a\") " pod="kube-system/kube-proxy-9ln6h" Jan 13 21:25:52.486608 kubelet[2499]: I0113 21:25:52.486575 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7579735a-98d6-4347-b989-1cca6b449d3a-kube-proxy\") pod \"kube-proxy-9ln6h\" (UID: \"7579735a-98d6-4347-b989-1cca6b449d3a\") " pod="kube-system/kube-proxy-9ln6h" Jan 13 21:25:52.486608 kubelet[2499]: I0113 21:25:52.486591 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjq67\" (UniqueName: \"kubernetes.io/projected/7579735a-98d6-4347-b989-1cca6b449d3a-kube-api-access-pjq67\") pod \"kube-proxy-9ln6h\" (UID: \"7579735a-98d6-4347-b989-1cca6b449d3a\") " pod="kube-system/kube-proxy-9ln6h" Jan 13 21:25:52.486654 kubelet[2499]: I0113 21:25:52.486611 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-hostproc\") pod \"cilium-wdzcv\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " pod="kube-system/cilium-wdzcv" Jan 13 21:25:52.486654 kubelet[2499]: I0113 21:25:52.486629 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-host-proc-sys-net\") pod \"cilium-wdzcv\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " pod="kube-system/cilium-wdzcv" Jan 13 21:25:52.486654 kubelet[2499]: I0113 21:25:52.486643 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a89f52d-034f-4239-ab0d-8327fda3fdbe-hubble-tls\") pod \"cilium-wdzcv\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " pod="kube-system/cilium-wdzcv" Jan 13 21:25:52.486725 kubelet[2499]: I0113 21:25:52.486681 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-cni-path\") pod \"cilium-wdzcv\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " pod="kube-system/cilium-wdzcv" Jan 13 21:25:52.486725 kubelet[2499]: I0113 21:25:52.486697 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-lib-modules\") pod \"cilium-wdzcv\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " pod="kube-system/cilium-wdzcv" Jan 13 21:25:52.486725 kubelet[2499]: I0113 21:25:52.486711 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a89f52d-034f-4239-ab0d-8327fda3fdbe-cilium-config-path\") pod \"cilium-wdzcv\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " pod="kube-system/cilium-wdzcv" Jan 13 21:25:52.486789 kubelet[2499]: I0113 21:25:52.486730 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-bpf-maps\") pod \"cilium-wdzcv\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " pod="kube-system/cilium-wdzcv" Jan 13 21:25:52.486789 kubelet[2499]: I0113 21:25:52.486749 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg2dz\" (UniqueName: \"kubernetes.io/projected/0a89f52d-034f-4239-ab0d-8327fda3fdbe-kube-api-access-rg2dz\") pod \"cilium-wdzcv\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " pod="kube-system/cilium-wdzcv" Jan 13 21:25:52.720755 kubelet[2499]: E0113 21:25:52.720718 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:52.721439 containerd[1466]: time="2025-01-13T21:25:52.721381662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9ln6h,Uid:7579735a-98d6-4347-b989-1cca6b449d3a,Namespace:kube-system,Attempt:0,}" Jan 13 21:25:52.729709 kubelet[2499]: E0113 21:25:52.729654 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:52.730283 containerd[1466]: time="2025-01-13T21:25:52.730234775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wdzcv,Uid:0a89f52d-034f-4239-ab0d-8327fda3fdbe,Namespace:kube-system,Attempt:0,}" Jan 13 21:25:52.786032 kubelet[2499]: E0113 21:25:52.786001 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:52.786682 containerd[1466]: time="2025-01-13T21:25:52.786636519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-sgc4c,Uid:07608cf7-4cb1-454d-b687-59338432ac98,Namespace:kube-system,Attempt:0,}" Jan 13 21:25:53.003894 containerd[1466]: time="2025-01-13T21:25:53.003302330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:25:53.003894 containerd[1466]: time="2025-01-13T21:25:53.003390879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:25:53.003894 containerd[1466]: time="2025-01-13T21:25:53.003403593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:53.003894 containerd[1466]: time="2025-01-13T21:25:53.003488685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:53.006660 containerd[1466]: time="2025-01-13T21:25:53.004229556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:25:53.006660 containerd[1466]: time="2025-01-13T21:25:53.004274842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:25:53.006660 containerd[1466]: time="2025-01-13T21:25:53.004289862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:53.006660 containerd[1466]: time="2025-01-13T21:25:53.004417003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:53.010460 containerd[1466]: time="2025-01-13T21:25:53.010386876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:25:53.010530 containerd[1466]: time="2025-01-13T21:25:53.010460005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:25:53.010530 containerd[1466]: time="2025-01-13T21:25:53.010475775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:53.010613 containerd[1466]: time="2025-01-13T21:25:53.010570616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:53.031478 systemd[1]: Started cri-containerd-d280beb8a0d7b10f47d74517b9ebbec98da915e5232030f27a505a4ad6bcf5cf.scope - libcontainer container d280beb8a0d7b10f47d74517b9ebbec98da915e5232030f27a505a4ad6bcf5cf. Jan 13 21:25:53.033193 systemd[1]: Started cri-containerd-d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60.scope - libcontainer container d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60. Jan 13 21:25:53.037195 systemd[1]: Started cri-containerd-39eba55a3705d28a6c61a20c195e73c969d8650c30778e704bd095448faa93ab.scope - libcontainer container 39eba55a3705d28a6c61a20c195e73c969d8650c30778e704bd095448faa93ab. Jan 13 21:25:53.064105 containerd[1466]: time="2025-01-13T21:25:53.063987453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wdzcv,Uid:0a89f52d-034f-4239-ab0d-8327fda3fdbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60\"" Jan 13 21:25:53.065293 kubelet[2499]: E0113 21:25:53.065072 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:53.065805 containerd[1466]: time="2025-01-13T21:25:53.065738308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9ln6h,Uid:7579735a-98d6-4347-b989-1cca6b449d3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d280beb8a0d7b10f47d74517b9ebbec98da915e5232030f27a505a4ad6bcf5cf\"" Jan 13 21:25:53.067710 kubelet[2499]: E0113 21:25:53.067684 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:53.071092 containerd[1466]: time="2025-01-13T21:25:53.071057750Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 21:25:53.073460 containerd[1466]: time="2025-01-13T21:25:53.073434308Z" level=info msg="CreateContainer within sandbox \"d280beb8a0d7b10f47d74517b9ebbec98da915e5232030f27a505a4ad6bcf5cf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:25:53.082716 containerd[1466]: time="2025-01-13T21:25:53.082652638Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-5d85765b45-sgc4c,Uid:07608cf7-4cb1-454d-b687-59338432ac98,Namespace:kube-system,Attempt:0,} returns sandbox id \"39eba55a3705d28a6c61a20c195e73c969d8650c30778e704bd095448faa93ab\"" Jan 13 21:25:53.083303 kubelet[2499]: E0113 21:25:53.083275 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:53.120519 containerd[1466]: time="2025-01-13T21:25:53.120463196Z" level=info msg="CreateContainer within sandbox \"d280beb8a0d7b10f47d74517b9ebbec98da915e5232030f27a505a4ad6bcf5cf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d9ee36a5ad9e762a2304ebcdc39c3f3adee60f3460fc1eb7f99b35df4adbc8db\"" Jan 13 21:25:53.121662 containerd[1466]: time="2025-01-13T21:25:53.120842669Z" level=info msg="StartContainer for \"d9ee36a5ad9e762a2304ebcdc39c3f3adee60f3460fc1eb7f99b35df4adbc8db\"" Jan 13 21:25:53.147457 systemd[1]: Started cri-containerd-d9ee36a5ad9e762a2304ebcdc39c3f3adee60f3460fc1eb7f99b35df4adbc8db.scope - libcontainer container d9ee36a5ad9e762a2304ebcdc39c3f3adee60f3460fc1eb7f99b35df4adbc8db. Jan 13 21:25:53.175486 containerd[1466]: time="2025-01-13T21:25:53.175450364Z" level=info msg="StartContainer for \"d9ee36a5ad9e762a2304ebcdc39c3f3adee60f3460fc1eb7f99b35df4adbc8db\" returns successfully" Jan 13 21:25:53.492956 kubelet[2499]: E0113 21:25:53.490264 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:56.950156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1636744235.mount: Deactivated successfully. Jan 13 21:25:57.044706 update_engine[1448]: I20250113 21:25:57.044621 1448 update_attempter.cc:509] Updating boot flags... 
Jan 13 21:25:57.412411 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2886) Jan 13 21:25:57.457432 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2886) Jan 13 21:25:58.166624 kubelet[2499]: E0113 21:25:58.166587 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:58.173721 kubelet[2499]: I0113 21:25:58.173669 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9ln6h" podStartSLOduration=6.173653356 podStartE2EDuration="6.173653356s" podCreationTimestamp="2025-01-13 21:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:25:53.502760073 +0000 UTC m=+6.111295369" watchObservedRunningTime="2025-01-13 21:25:58.173653356 +0000 UTC m=+10.782188652" Jan 13 21:25:59.297643 kubelet[2499]: E0113 21:25:59.297579 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:01.652953 kubelet[2499]: E0113 21:26:01.652923 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:02.001423 containerd[1466]: time="2025-01-13T21:26:02.001361668Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:02.002067 containerd[1466]: time="2025-01-13T21:26:02.002018811Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734667" Jan 13 21:26:02.003176 containerd[1466]: time="2025-01-13T21:26:02.003111609Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:02.004824 containerd[1466]: time="2025-01-13T21:26:02.004779274Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.933676658s" Jan 13 21:26:02.004824 containerd[1466]: time="2025-01-13T21:26:02.004817897Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 21:26:02.009044 containerd[1466]: time="2025-01-13T21:26:02.009021209Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 21:26:02.022356 containerd[1466]: time="2025-01-13T21:26:02.022282438Z" level=info msg="CreateContainer within sandbox \"d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:26:02.035863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1336594195.mount: Deactivated successfully. Jan 13 21:26:02.036187 containerd[1466]: time="2025-01-13T21:26:02.036158470Z" level=info msg="CreateContainer within sandbox \"d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a\"" Jan 13 21:26:02.038876 containerd[1466]: time="2025-01-13T21:26:02.038809947Z" level=info msg="StartContainer for \"7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a\"" Jan 13 21:26:02.077462 systemd[1]: Started cri-containerd-7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a.scope - libcontainer container 7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a. Jan 13 21:26:02.105838 containerd[1466]: time="2025-01-13T21:26:02.105777774Z" level=info msg="StartContainer for \"7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a\" returns successfully" Jan 13 21:26:02.115343 systemd[1]: cri-containerd-7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a.scope: Deactivated successfully. Jan 13 21:26:02.712570 kubelet[2499]: E0113 21:26:02.712272 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:02.805360 containerd[1466]: time="2025-01-13T21:26:02.801093762Z" level=info msg="shim disconnected" id=7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a namespace=k8s.io Jan 13 21:26:02.805360 containerd[1466]: time="2025-01-13T21:26:02.804298445Z" level=warning msg="cleaning up after shim disconnected" id=7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a namespace=k8s.io Jan 13 21:26:02.805360 containerd[1466]: time="2025-01-13T21:26:02.804314095Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:26:03.033288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a-rootfs.mount: Deactivated successfully. Jan 13 21:26:03.715668 kubelet[2499]: E0113 21:26:03.715632 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:03.717523 containerd[1466]: time="2025-01-13T21:26:03.717465673Z" level=info msg="CreateContainer within sandbox \"d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:26:03.732394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173790522.mount: Deactivated successfully. 
Jan 13 21:26:03.733609 containerd[1466]: time="2025-01-13T21:26:03.733570797Z" level=info msg="CreateContainer within sandbox \"d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d\"" Jan 13 21:26:03.734097 containerd[1466]: time="2025-01-13T21:26:03.734066775Z" level=info msg="StartContainer for \"1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d\"" Jan 13 21:26:03.762466 systemd[1]: Started cri-containerd-1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d.scope - libcontainer container 1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d. Jan 13 21:26:03.787073 containerd[1466]: time="2025-01-13T21:26:03.787021235Z" level=info msg="StartContainer for \"1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d\" returns successfully" Jan 13 21:26:03.798565 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:26:03.798804 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:26:03.798880 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:26:03.806800 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:26:03.807123 systemd[1]: cri-containerd-1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d.scope: Deactivated successfully. Jan 13 21:26:03.824625 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:26:03.826664 containerd[1466]: time="2025-01-13T21:26:03.826607789Z" level=info msg="shim disconnected" id=1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d namespace=k8s.io Jan 13 21:26:03.826664 containerd[1466]: time="2025-01-13T21:26:03.826663684Z" level=warning msg="cleaning up after shim disconnected" id=1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d namespace=k8s.io Jan 13 21:26:03.826976 containerd[1466]: time="2025-01-13T21:26:03.826674825Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:26:04.033249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d-rootfs.mount: Deactivated successfully. Jan 13 21:26:04.718160 kubelet[2499]: E0113 21:26:04.718130 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:04.719828 containerd[1466]: time="2025-01-13T21:26:04.719779130Z" level=info msg="CreateContainer within sandbox \"d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:26:04.739553 containerd[1466]: time="2025-01-13T21:26:04.739498128Z" level=info msg="CreateContainer within sandbox \"d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116\"" Jan 13 21:26:04.740542 containerd[1466]: time="2025-01-13T21:26:04.739973967Z" level=info msg="StartContainer for \"cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116\"" Jan 13 21:26:04.768996 systemd[1]: Started cri-containerd-cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116.scope - libcontainer container cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116. 
Jan 13 21:26:04.797361 systemd[1]: cri-containerd-cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116.scope: Deactivated successfully. Jan 13 21:26:04.798614 containerd[1466]: time="2025-01-13T21:26:04.798554028Z" level=info msg="StartContainer for \"cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116\" returns successfully" Jan 13 21:26:04.820731 containerd[1466]: time="2025-01-13T21:26:04.820662190Z" level=info msg="shim disconnected" id=cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116 namespace=k8s.io Jan 13 21:26:04.820731 containerd[1466]: time="2025-01-13T21:26:04.820719418Z" level=warning msg="cleaning up after shim disconnected" id=cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116 namespace=k8s.io Jan 13 21:26:04.820731 containerd[1466]: time="2025-01-13T21:26:04.820731601Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:26:05.032942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116-rootfs.mount: Deactivated successfully. Jan 13 21:26:05.720942 kubelet[2499]: E0113 21:26:05.720913 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:05.722158 containerd[1466]: time="2025-01-13T21:26:05.722127511Z" level=info msg="CreateContainer within sandbox \"d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:26:05.979203 containerd[1466]: time="2025-01-13T21:26:05.979057321Z" level=info msg="CreateContainer within sandbox \"d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab\"" Jan 13 21:26:05.979834 containerd[1466]: time="2025-01-13T21:26:05.979709213Z" level=info msg="StartContainer for \"927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab\"" Jan 13 21:26:06.013513 systemd[1]: Started cri-containerd-927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab.scope - libcontainer container 927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab. Jan 13 21:26:06.036226 systemd[1]: cri-containerd-927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab.scope: Deactivated successfully. Jan 13 21:26:06.038587 containerd[1466]: time="2025-01-13T21:26:06.038540904Z" level=info msg="StartContainer for \"927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab\" returns successfully" Jan 13 21:26:06.055903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab-rootfs.mount: Deactivated successfully. 
Jan 13 21:26:06.064619 containerd[1466]: time="2025-01-13T21:26:06.064543851Z" level=info msg="shim disconnected" id=927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab namespace=k8s.io Jan 13 21:26:06.064619 containerd[1466]: time="2025-01-13T21:26:06.064594477Z" level=warning msg="cleaning up after shim disconnected" id=927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab namespace=k8s.io Jan 13 21:26:06.064619 containerd[1466]: time="2025-01-13T21:26:06.064604726Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:26:06.724924 kubelet[2499]: E0113 21:26:06.724892 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:06.726883 containerd[1466]: time="2025-01-13T21:26:06.726822229Z" level=info msg="CreateContainer within sandbox \"d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:26:06.748976 containerd[1466]: time="2025-01-13T21:26:06.748927016Z" level=info msg="CreateContainer within sandbox \"d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75\"" Jan 13 21:26:06.749586 containerd[1466]: time="2025-01-13T21:26:06.749537279Z" level=info msg="StartContainer for \"a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75\"" Jan 13 21:26:06.776501 systemd[1]: Started cri-containerd-a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75.scope - libcontainer container a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75. Jan 13 21:26:06.807585 containerd[1466]: time="2025-01-13T21:26:06.807528284Z" level=info msg="StartContainer for \"a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75\" returns successfully" Jan 13 21:26:06.914637 kubelet[2499]: I0113 21:26:06.914602 2499 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 21:26:06.950120 systemd[1]: Created slice kubepods-burstable-podba51b1e1_0824_4a65_8db8_cbe16d512ec8.slice - libcontainer container kubepods-burstable-podba51b1e1_0824_4a65_8db8_cbe16d512ec8.slice. Jan 13 21:26:06.956511 systemd[1]: Created slice kubepods-burstable-pod4fbf4695_069b_4934_95b4_33e39b51ccd3.slice - libcontainer container kubepods-burstable-pod4fbf4695_069b_4934_95b4_33e39b51ccd3.slice. 
Jan 13 21:26:06.987463 kubelet[2499]: I0113 21:26:06.987320 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdf6m\" (UniqueName: \"kubernetes.io/projected/4fbf4695-069b-4934-95b4-33e39b51ccd3-kube-api-access-hdf6m\") pod \"coredns-6f6b679f8f-862s2\" (UID: \"4fbf4695-069b-4934-95b4-33e39b51ccd3\") " pod="kube-system/coredns-6f6b679f8f-862s2" Jan 13 21:26:06.987463 kubelet[2499]: I0113 21:26:06.987377 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba51b1e1-0824-4a65-8db8-cbe16d512ec8-config-volume\") pod \"coredns-6f6b679f8f-d27p6\" (UID: \"ba51b1e1-0824-4a65-8db8-cbe16d512ec8\") " pod="kube-system/coredns-6f6b679f8f-d27p6" Jan 13 21:26:06.987463 kubelet[2499]: I0113 21:26:06.987404 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvfdx\" (UniqueName: \"kubernetes.io/projected/ba51b1e1-0824-4a65-8db8-cbe16d512ec8-kube-api-access-gvfdx\") pod \"coredns-6f6b679f8f-d27p6\" (UID: \"ba51b1e1-0824-4a65-8db8-cbe16d512ec8\") " pod="kube-system/coredns-6f6b679f8f-d27p6" Jan 13 21:26:06.987463 kubelet[2499]: I0113 21:26:06.987418 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fbf4695-069b-4934-95b4-33e39b51ccd3-config-volume\") pod \"coredns-6f6b679f8f-862s2\" (UID: \"4fbf4695-069b-4934-95b4-33e39b51ccd3\") " pod="kube-system/coredns-6f6b679f8f-862s2" Jan 13 21:26:07.254176 kubelet[2499]: E0113 21:26:07.254071 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:07.254666 containerd[1466]: time="2025-01-13T21:26:07.254621369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d27p6,Uid:ba51b1e1-0824-4a65-8db8-cbe16d512ec8,Namespace:kube-system,Attempt:0,}" Jan 13 21:26:07.259401 kubelet[2499]: E0113 21:26:07.259373 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:07.259728 containerd[1466]: time="2025-01-13T21:26:07.259698822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-862s2,Uid:4fbf4695-069b-4934-95b4-33e39b51ccd3,Namespace:kube-system,Attempt:0,}" Jan 13 21:26:07.728553 kubelet[2499]: E0113 21:26:07.728528 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:08.334838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1818422935.mount: Deactivated successfully. 
Jan 13 21:26:08.621853 containerd[1466]: time="2025-01-13T21:26:08.621732719Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:08.622479 containerd[1466]: time="2025-01-13T21:26:08.622427941Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906617" Jan 13 21:26:08.623530 containerd[1466]: time="2025-01-13T21:26:08.623505644Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:08.624668 containerd[1466]: time="2025-01-13T21:26:08.624639623Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.615531839s" Jan 13 21:26:08.624705 containerd[1466]: time="2025-01-13T21:26:08.624675420Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 21:26:08.631734 containerd[1466]: time="2025-01-13T21:26:08.631701846Z" level=info msg="CreateContainer within sandbox \"39eba55a3705d28a6c61a20c195e73c969d8650c30778e704bd095448faa93ab\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 21:26:08.645007 containerd[1466]: time="2025-01-13T21:26:08.644956302Z" level=info msg="CreateContainer within sandbox \"39eba55a3705d28a6c61a20c195e73c969d8650c30778e704bd095448faa93ab\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b\"" Jan 13 21:26:08.645587 containerd[1466]: time="2025-01-13T21:26:08.645538581Z" level=info msg="StartContainer for \"0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b\"" Jan 13 21:26:08.677479 systemd[1]: Started cri-containerd-0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b.scope - libcontainer container 0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b. 
Jan 13 21:26:08.703169 containerd[1466]: time="2025-01-13T21:26:08.703132688Z" level=info msg="StartContainer for \"0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b\" returns successfully" Jan 13 21:26:08.751926 kubelet[2499]: E0113 21:26:08.751870 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:08.757364 kubelet[2499]: I0113 21:26:08.755800 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wdzcv" podStartSLOduration=7.813414625 podStartE2EDuration="16.755780152s" podCreationTimestamp="2025-01-13 21:25:52 +0000 UTC" firstStartedPulling="2025-01-13 21:25:53.066473057 +0000 UTC m=+5.675008353" lastFinishedPulling="2025-01-13 21:26:02.008838584 +0000 UTC m=+14.617373880" observedRunningTime="2025-01-13 21:26:07.748234138 +0000 UTC m=+20.356769434" watchObservedRunningTime="2025-01-13 21:26:08.755780152 +0000 UTC m=+21.364315448" Jan 13 21:26:08.757364 kubelet[2499]: I0113 21:26:08.756012 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-sgc4c" podStartSLOduration=1.209087048 podStartE2EDuration="16.756009194s" podCreationTimestamp="2025-01-13 21:25:52 +0000 UTC" firstStartedPulling="2025-01-13 21:25:53.083683382 +0000 UTC m=+5.692218678" lastFinishedPulling="2025-01-13 21:26:08.630605528 +0000 UTC m=+21.239140824" observedRunningTime="2025-01-13 21:26:08.755997182 +0000 UTC m=+21.364532478" watchObservedRunningTime="2025-01-13 21:26:08.756009194 +0000 UTC m=+21.364544490" Jan 13 21:26:08.762411 kubelet[2499]: E0113 21:26:08.762375 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:09.743542 kubelet[2499]: E0113 21:26:09.743509 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:09.743797 kubelet[2499]: E0113 21:26:09.743622 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:12.058411 systemd-networkd[1392]: cilium_host: Link UP Jan 13 21:26:12.059047 systemd-networkd[1392]: cilium_net: Link UP Jan 13 21:26:12.059377 systemd-networkd[1392]: cilium_net: Gained carrier Jan 13 21:26:12.059989 systemd-networkd[1392]: cilium_host: Gained carrier Jan 13 21:26:12.175241 systemd-networkd[1392]: cilium_vxlan: Link UP Jan 13 21:26:12.175253 systemd-networkd[1392]: cilium_vxlan: Gained carrier Jan 13 21:26:12.387642 kernel: NET: Registered PF_ALG protocol family Jan 13 21:26:12.512553 systemd-networkd[1392]: cilium_host: Gained IPv6LL Jan 13 21:26:12.688878 systemd-networkd[1392]: cilium_net: Gained IPv6LL Jan 13 21:26:13.049082 systemd-networkd[1392]: lxc_health: Link UP Jan 13 21:26:13.052638 systemd-networkd[1392]: lxc_health: Gained carrier Jan 13 21:26:13.340719 systemd-networkd[1392]: lxce64e655d905a: Link UP Jan 13 21:26:13.358492 kernel: eth0: renamed from tmpe880d Jan 13 21:26:13.368652 systemd-networkd[1392]: lxcdb6a943a8470: Link UP Jan 13 21:26:13.370348 kernel: eth0: renamed from tmp9d3ab Jan 13 21:26:13.376614 systemd-networkd[1392]: lxcdb6a943a8470: Gained carrier Jan 13 21:26:13.376976 systemd-networkd[1392]: 
lxce64e655d905a: Gained carrier Jan 13 21:26:13.971425 systemd-networkd[1392]: cilium_vxlan: Gained IPv6LL Jan 13 21:26:14.731227 kubelet[2499]: E0113 21:26:14.731182 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:15.120497 systemd-networkd[1392]: lxc_health: Gained IPv6LL Jan 13 21:26:15.120947 systemd-networkd[1392]: lxcdb6a943a8470: Gained IPv6LL Jan 13 21:26:15.376531 systemd-networkd[1392]: lxce64e655d905a: Gained IPv6LL Jan 13 21:26:16.812072 containerd[1466]: time="2025-01-13T21:26:16.811940652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:26:16.812072 containerd[1466]: time="2025-01-13T21:26:16.812039317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:26:16.812568 containerd[1466]: time="2025-01-13T21:26:16.812084483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:16.812874 containerd[1466]: time="2025-01-13T21:26:16.812764241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:16.813142 containerd[1466]: time="2025-01-13T21:26:16.812937919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:26:16.813142 containerd[1466]: time="2025-01-13T21:26:16.812982302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:26:16.813142 containerd[1466]: time="2025-01-13T21:26:16.813125802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:16.813311 containerd[1466]: time="2025-01-13T21:26:16.813269553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:16.834486 systemd[1]: Started cri-containerd-9d3ab493c5afe52fdc642988e8ae5a3ab56ca8a45b66a7516a1ddf0f5a71989b.scope - libcontainer container 9d3ab493c5afe52fdc642988e8ae5a3ab56ca8a45b66a7516a1ddf0f5a71989b. Jan 13 21:26:16.840458 systemd[1]: Started cri-containerd-e880df2be604f56cecbb775d5de7e50e7412b532a2f9e620cc25916414a3fc2f.scope - libcontainer container e880df2be604f56cecbb775d5de7e50e7412b532a2f9e620cc25916414a3fc2f. 
Jan 13 21:26:16.848042 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:26:16.852904 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:26:16.873892 containerd[1466]: time="2025-01-13T21:26:16.873838588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d27p6,Uid:ba51b1e1-0824-4a65-8db8-cbe16d512ec8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d3ab493c5afe52fdc642988e8ae5a3ab56ca8a45b66a7516a1ddf0f5a71989b\"" Jan 13 21:26:16.874701 kubelet[2499]: E0113 21:26:16.874670 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:16.876660 containerd[1466]: time="2025-01-13T21:26:16.876605805Z" level=info msg="CreateContainer within sandbox \"9d3ab493c5afe52fdc642988e8ae5a3ab56ca8a45b66a7516a1ddf0f5a71989b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:26:16.881272 containerd[1466]: time="2025-01-13T21:26:16.881225099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-862s2,Uid:4fbf4695-069b-4934-95b4-33e39b51ccd3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e880df2be604f56cecbb775d5de7e50e7412b532a2f9e620cc25916414a3fc2f\"" Jan 13 21:26:16.882188 kubelet[2499]: E0113 21:26:16.882168 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:16.884022 containerd[1466]: time="2025-01-13T21:26:16.883976919Z" level=info msg="CreateContainer within sandbox \"e880df2be604f56cecbb775d5de7e50e7412b532a2f9e620cc25916414a3fc2f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:26:16.903599 containerd[1466]: time="2025-01-13T21:26:16.903546359Z" level=info msg="CreateContainer within sandbox \"9d3ab493c5afe52fdc642988e8ae5a3ab56ca8a45b66a7516a1ddf0f5a71989b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f2fcc658bf4d74cf7d15c9ef080da131556222f039b73e6944961dab150f0515\"" Jan 13 21:26:16.904111 containerd[1466]: time="2025-01-13T21:26:16.904083791Z" level=info msg="StartContainer for \"f2fcc658bf4d74cf7d15c9ef080da131556222f039b73e6944961dab150f0515\"" Jan 13 21:26:16.909017 containerd[1466]: time="2025-01-13T21:26:16.908909082Z" level=info msg="CreateContainer within sandbox \"e880df2be604f56cecbb775d5de7e50e7412b532a2f9e620cc25916414a3fc2f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"80111fe5f30ceb668075478b4a8236b1f8facac7ccb2cbe98aabe2f92ac28c9b\"" Jan 13 21:26:16.909501 containerd[1466]: time="2025-01-13T21:26:16.909467413Z" level=info msg="StartContainer for \"80111fe5f30ceb668075478b4a8236b1f8facac7ccb2cbe98aabe2f92ac28c9b\"" Jan 13 21:26:16.931488 systemd[1]: Started cri-containerd-f2fcc658bf4d74cf7d15c9ef080da131556222f039b73e6944961dab150f0515.scope - libcontainer container f2fcc658bf4d74cf7d15c9ef080da131556222f039b73e6944961dab150f0515. Jan 13 21:26:16.934121 systemd[1]: Started cri-containerd-80111fe5f30ceb668075478b4a8236b1f8facac7ccb2cbe98aabe2f92ac28c9b.scope - libcontainer container 80111fe5f30ceb668075478b4a8236b1f8facac7ccb2cbe98aabe2f92ac28c9b. 
Jan 13 21:26:16.969712 containerd[1466]: time="2025-01-13T21:26:16.969665951Z" level=info msg="StartContainer for \"80111fe5f30ceb668075478b4a8236b1f8facac7ccb2cbe98aabe2f92ac28c9b\" returns successfully" Jan 13 21:26:16.969712 containerd[1466]: time="2025-01-13T21:26:16.969706116Z" level=info msg="StartContainer for \"f2fcc658bf4d74cf7d15c9ef080da131556222f039b73e6944961dab150f0515\" returns successfully" Jan 13 21:26:16.977882 systemd[1]: Started sshd@8-10.0.0.102:22-10.0.0.1:54834.service - OpenSSH per-connection server daemon (10.0.0.1:54834). Jan 13 21:26:17.020953 sshd[3884]: Accepted publickey for core from 10.0.0.1 port 54834 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:17.022717 sshd[3884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:17.029425 systemd-logind[1447]: New session 8 of user core. Jan 13 21:26:17.034653 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:26:17.284461 sshd[3884]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:17.288923 systemd[1]: sshd@8-10.0.0.102:22-10.0.0.1:54834.service: Deactivated successfully. Jan 13 21:26:17.290743 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:26:17.291365 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:26:17.292162 systemd-logind[1447]: Removed session 8. Jan 13 21:26:17.758594 kubelet[2499]: E0113 21:26:17.758351 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:17.760438 kubelet[2499]: E0113 21:26:17.760415 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:17.896674 kubelet[2499]: I0113 21:26:17.896553 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-862s2" podStartSLOduration=25.896537688 podStartE2EDuration="25.896537688s" podCreationTimestamp="2025-01-13 21:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:26:17.896284572 +0000 UTC m=+30.504819868" watchObservedRunningTime="2025-01-13 21:26:17.896537688 +0000 UTC m=+30.505072984" Jan 13 21:26:17.925453 kubelet[2499]: I0113 21:26:17.925362 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-d27p6" podStartSLOduration=25.925324775 podStartE2EDuration="25.925324775s" podCreationTimestamp="2025-01-13 21:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:26:17.909869524 +0000 UTC m=+30.518404820" watchObservedRunningTime="2025-01-13 21:26:17.925324775 +0000 UTC m=+30.533860071" Jan 13 21:26:18.762203 kubelet[2499]: E0113 21:26:18.762119 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:18.762373 kubelet[2499]: E0113 21:26:18.762242 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:19.655310 kubelet[2499]: I0113 21:26:19.655247 2499 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:26:19.656113 kubelet[2499]: E0113 21:26:19.656080 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:19.763881 kubelet[2499]: E0113 21:26:19.763677 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:19.763881 kubelet[2499]: E0113 21:26:19.763808 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:19.764024 kubelet[2499]: E0113 21:26:19.763900 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:22.300052 systemd[1]: Started sshd@9-10.0.0.102:22-10.0.0.1:44332.service - OpenSSH per-connection server daemon (10.0.0.1:44332). Jan 13 21:26:22.337299 sshd[3927]: Accepted publickey for core from 10.0.0.1 port 44332 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:22.339014 sshd[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:22.343547 systemd-logind[1447]: New session 9 of user core. Jan 13 21:26:22.356544 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:26:22.499294 sshd[3927]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:22.503141 systemd[1]: sshd@9-10.0.0.102:22-10.0.0.1:44332.service: Deactivated successfully. Jan 13 21:26:22.505259 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:26:22.505966 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:26:22.506797 systemd-logind[1447]: Removed session 9. Jan 13 21:26:27.518900 systemd[1]: Started sshd@10-10.0.0.102:22-10.0.0.1:48424.service - OpenSSH per-connection server daemon (10.0.0.1:48424). Jan 13 21:26:27.555372 sshd[3944]: Accepted publickey for core from 10.0.0.1 port 48424 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:27.556994 sshd[3944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:27.561244 systemd-logind[1447]: New session 10 of user core. Jan 13 21:26:27.570554 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:26:27.686563 sshd[3944]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:27.691272 systemd[1]: sshd@10-10.0.0.102:22-10.0.0.1:48424.service: Deactivated successfully. Jan 13 21:26:27.693705 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:26:27.694418 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:26:27.695375 systemd-logind[1447]: Removed session 10. Jan 13 21:26:32.698309 systemd[1]: Started sshd@11-10.0.0.102:22-10.0.0.1:48438.service - OpenSSH per-connection server daemon (10.0.0.1:48438). Jan 13 21:26:32.733226 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 48438 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:32.734791 sshd[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:32.738781 systemd-logind[1447]: New session 11 of user core. 
Jan 13 21:26:32.745474 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:26:32.848943 sshd[3960]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:32.863270 systemd[1]: sshd@11-10.0.0.102:22-10.0.0.1:48438.service: Deactivated successfully. Jan 13 21:26:32.865103 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:26:32.866624 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:26:32.877613 systemd[1]: Started sshd@12-10.0.0.102:22-10.0.0.1:48446.service - OpenSSH per-connection server daemon (10.0.0.1:48446). Jan 13 21:26:32.878488 systemd-logind[1447]: Removed session 11. Jan 13 21:26:32.908409 sshd[3975]: Accepted publickey for core from 10.0.0.1 port 48446 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:32.910136 sshd[3975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:32.914033 systemd-logind[1447]: New session 12 of user core. Jan 13 21:26:32.924490 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 21:26:33.114301 sshd[3975]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:33.122642 systemd[1]: sshd@12-10.0.0.102:22-10.0.0.1:48446.service: Deactivated successfully. Jan 13 21:26:33.124586 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:26:33.126383 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:26:33.136677 systemd[1]: Started sshd@13-10.0.0.102:22-10.0.0.1:48460.service - OpenSSH per-connection server daemon (10.0.0.1:48460). Jan 13 21:26:33.138915 systemd-logind[1447]: Removed session 12. Jan 13 21:26:33.166177 sshd[3988]: Accepted publickey for core from 10.0.0.1 port 48460 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:33.167898 sshd[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:33.171864 systemd-logind[1447]: New session 13 of user core. Jan 13 21:26:33.178462 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:26:33.289121 sshd[3988]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:33.294056 systemd[1]: sshd@13-10.0.0.102:22-10.0.0.1:48460.service: Deactivated successfully. Jan 13 21:26:33.296109 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:26:33.296914 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:26:33.297861 systemd-logind[1447]: Removed session 13. Jan 13 21:26:38.304361 systemd[1]: Started sshd@14-10.0.0.102:22-10.0.0.1:59280.service - OpenSSH per-connection server daemon (10.0.0.1:59280). Jan 13 21:26:38.339692 sshd[4003]: Accepted publickey for core from 10.0.0.1 port 59280 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:38.341152 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:38.345168 systemd-logind[1447]: New session 14 of user core. Jan 13 21:26:38.355489 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 21:26:38.459029 sshd[4003]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:38.462453 systemd[1]: sshd@14-10.0.0.102:22-10.0.0.1:59280.service: Deactivated successfully. Jan 13 21:26:38.464609 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:26:38.466405 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit. Jan 13 21:26:38.467313 systemd-logind[1447]: Removed session 14. 
Jan 13 21:26:43.470427 systemd[1]: Started sshd@15-10.0.0.102:22-10.0.0.1:59284.service - OpenSSH per-connection server daemon (10.0.0.1:59284). Jan 13 21:26:43.504373 sshd[4017]: Accepted publickey for core from 10.0.0.1 port 59284 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:43.506042 sshd[4017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:43.510457 systemd-logind[1447]: New session 15 of user core. Jan 13 21:26:43.520514 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 21:26:43.623200 sshd[4017]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:43.636995 systemd[1]: sshd@15-10.0.0.102:22-10.0.0.1:59284.service: Deactivated successfully. Jan 13 21:26:43.638575 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:26:43.640176 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:26:43.644577 systemd[1]: Started sshd@16-10.0.0.102:22-10.0.0.1:59294.service - OpenSSH per-connection server daemon (10.0.0.1:59294). Jan 13 21:26:43.645491 systemd-logind[1447]: Removed session 15. Jan 13 21:26:43.677266 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 59294 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:43.678884 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:43.682384 systemd-logind[1447]: New session 16 of user core. Jan 13 21:26:43.691476 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:26:43.916066 sshd[4032]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:43.928431 systemd[1]: sshd@16-10.0.0.102:22-10.0.0.1:59294.service: Deactivated successfully. Jan 13 21:26:43.930554 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:26:43.932035 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:26:43.933619 systemd[1]: Started sshd@17-10.0.0.102:22-10.0.0.1:59300.service - OpenSSH per-connection server daemon (10.0.0.1:59300). Jan 13 21:26:43.934495 systemd-logind[1447]: Removed session 16. Jan 13 21:26:43.971986 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 59300 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:43.973514 sshd[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:43.977355 systemd-logind[1447]: New session 17 of user core. Jan 13 21:26:43.986528 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:26:45.327958 sshd[4044]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:45.340733 systemd[1]: Started sshd@18-10.0.0.102:22-10.0.0.1:59302.service - OpenSSH per-connection server daemon (10.0.0.1:59302). Jan 13 21:26:45.341325 systemd[1]: sshd@17-10.0.0.102:22-10.0.0.1:59300.service: Deactivated successfully. Jan 13 21:26:45.344758 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:26:45.348670 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:26:45.350228 systemd-logind[1447]: Removed session 17. Jan 13 21:26:45.378187 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 59302 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:45.379767 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:45.384161 systemd-logind[1447]: New session 18 of user core. 
Jan 13 21:26:45.392472 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 21:26:45.703875 sshd[4062]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:45.716248 systemd[1]: sshd@18-10.0.0.102:22-10.0.0.1:59302.service: Deactivated successfully. Jan 13 21:26:45.718595 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 21:26:45.719417 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:26:45.732756 systemd[1]: Started sshd@19-10.0.0.102:22-10.0.0.1:59308.service - OpenSSH per-connection server daemon (10.0.0.1:59308). Jan 13 21:26:45.733763 systemd-logind[1447]: Removed session 18. Jan 13 21:26:45.765170 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 59308 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:45.767087 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:45.771553 systemd-logind[1447]: New session 19 of user core. Jan 13 21:26:45.781508 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 21:26:45.884863 sshd[4076]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:45.888632 systemd[1]: sshd@19-10.0.0.102:22-10.0.0.1:59308.service: Deactivated successfully. Jan 13 21:26:45.890621 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 21:26:45.891320 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit. Jan 13 21:26:45.892231 systemd-logind[1447]: Removed session 19. Jan 13 21:26:50.896395 systemd[1]: Started sshd@20-10.0.0.102:22-10.0.0.1:46270.service - OpenSSH per-connection server daemon (10.0.0.1:46270). Jan 13 21:26:50.931039 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 46270 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:50.932625 sshd[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:50.936530 systemd-logind[1447]: New session 20 of user core. Jan 13 21:26:50.949504 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 21:26:51.060003 sshd[4092]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:51.063581 systemd[1]: sshd@20-10.0.0.102:22-10.0.0.1:46270.service: Deactivated successfully. Jan 13 21:26:51.065515 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:26:51.066151 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit. Jan 13 21:26:51.066998 systemd-logind[1447]: Removed session 20. Jan 13 21:26:56.071229 systemd[1]: Started sshd@21-10.0.0.102:22-10.0.0.1:46282.service - OpenSSH per-connection server daemon (10.0.0.1:46282). Jan 13 21:26:56.105429 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 46282 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:56.106968 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:56.110586 systemd-logind[1447]: New session 21 of user core. Jan 13 21:26:56.120449 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 21:26:56.223361 sshd[4111]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:56.227228 systemd[1]: sshd@21-10.0.0.102:22-10.0.0.1:46282.service: Deactivated successfully. Jan 13 21:26:56.228991 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 21:26:56.229571 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit. Jan 13 21:26:56.230361 systemd-logind[1447]: Removed session 21. 
Jan 13 21:27:01.235325 systemd[1]: Started sshd@22-10.0.0.102:22-10.0.0.1:47482.service - OpenSSH per-connection server daemon (10.0.0.1:47482). Jan 13 21:27:01.271149 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 47482 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:27:01.272648 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:01.276656 systemd-logind[1447]: New session 22 of user core. Jan 13 21:27:01.292484 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 21:27:01.392546 sshd[4126]: pam_unix(sshd:session): session closed for user core Jan 13 21:27:01.396476 systemd[1]: sshd@22-10.0.0.102:22-10.0.0.1:47482.service: Deactivated successfully. Jan 13 21:27:01.398521 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 21:27:01.399153 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit. Jan 13 21:27:01.400109 systemd-logind[1447]: Removed session 22. Jan 13 21:27:06.404909 systemd[1]: Started sshd@23-10.0.0.102:22-10.0.0.1:47488.service - OpenSSH per-connection server daemon (10.0.0.1:47488). Jan 13 21:27:06.439006 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 47488 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:27:06.440446 sshd[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:06.444231 systemd-logind[1447]: New session 23 of user core. Jan 13 21:27:06.451449 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 21:27:06.548197 sshd[4140]: pam_unix(sshd:session): session closed for user core Jan 13 21:27:06.560853 systemd[1]: sshd@23-10.0.0.102:22-10.0.0.1:47488.service: Deactivated successfully. Jan 13 21:27:06.562345 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 21:27:06.563863 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit. Jan 13 21:27:06.573576 systemd[1]: Started sshd@24-10.0.0.102:22-10.0.0.1:47490.service - OpenSSH per-connection server daemon (10.0.0.1:47490). Jan 13 21:27:06.574380 systemd-logind[1447]: Removed session 23. Jan 13 21:27:06.603454 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 47490 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:27:06.604845 sshd[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:06.608567 systemd-logind[1447]: New session 24 of user core. Jan 13 21:27:06.616445 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 21:27:08.109825 containerd[1466]: time="2025-01-13T21:27:08.109658876Z" level=info msg="StopContainer for \"0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b\" with timeout 30 (s)" Jan 13 21:27:08.111526 containerd[1466]: time="2025-01-13T21:27:08.111482127Z" level=info msg="Stop container \"0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b\" with signal terminated" Jan 13 21:27:08.128591 systemd[1]: cri-containerd-0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b.scope: Deactivated successfully. 
Jan 13 21:27:08.139170 containerd[1466]: time="2025-01-13T21:27:08.138974915Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:27:08.151862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b-rootfs.mount: Deactivated successfully. Jan 13 21:27:08.152163 containerd[1466]: time="2025-01-13T21:27:08.152013943Z" level=info msg="StopContainer for \"a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75\" with timeout 2 (s)" Jan 13 21:27:08.152306 containerd[1466]: time="2025-01-13T21:27:08.152284380Z" level=info msg="Stop container \"a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75\" with signal terminated" Jan 13 21:27:08.159282 containerd[1466]: time="2025-01-13T21:27:08.159216206Z" level=info msg="shim disconnected" id=0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b namespace=k8s.io Jan 13 21:27:08.159282 containerd[1466]: time="2025-01-13T21:27:08.159277774Z" level=warning msg="cleaning up after shim disconnected" id=0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b namespace=k8s.io Jan 13 21:27:08.159282 containerd[1466]: time="2025-01-13T21:27:08.159286961Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:27:08.160382 systemd-networkd[1392]: lxc_health: Link DOWN Jan 13 21:27:08.160391 systemd-networkd[1392]: lxc_health: Lost carrier Jan 13 21:27:08.177980 containerd[1466]: time="2025-01-13T21:27:08.177922907Z" level=info msg="StopContainer for \"0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b\" returns successfully" Jan 13 21:27:08.182185 containerd[1466]: time="2025-01-13T21:27:08.182150650Z" level=info msg="StopPodSandbox for \"39eba55a3705d28a6c61a20c195e73c969d8650c30778e704bd095448faa93ab\"" Jan 13 21:27:08.182236 containerd[1466]: time="2025-01-13T21:27:08.182197228Z" level=info msg="Container to stop \"0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:27:08.184200 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-39eba55a3705d28a6c61a20c195e73c969d8650c30778e704bd095448faa93ab-shm.mount: Deactivated successfully. Jan 13 21:27:08.190486 systemd[1]: cri-containerd-a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75.scope: Deactivated successfully. Jan 13 21:27:08.190782 systemd[1]: cri-containerd-a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75.scope: Consumed 6.929s CPU time. Jan 13 21:27:08.191624 systemd[1]: cri-containerd-39eba55a3705d28a6c61a20c195e73c969d8650c30778e704bd095448faa93ab.scope: Deactivated successfully. Jan 13 21:27:08.214434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75-rootfs.mount: Deactivated successfully. 
Jan 13 21:27:08.269851 containerd[1466]: time="2025-01-13T21:27:08.269764730Z" level=info msg="shim disconnected" id=39eba55a3705d28a6c61a20c195e73c969d8650c30778e704bd095448faa93ab namespace=k8s.io Jan 13 21:27:08.269851 containerd[1466]: time="2025-01-13T21:27:08.269844713Z" level=warning msg="cleaning up after shim disconnected" id=39eba55a3705d28a6c61a20c195e73c969d8650c30778e704bd095448faa93ab namespace=k8s.io Jan 13 21:27:08.269851 containerd[1466]: time="2025-01-13T21:27:08.269853409Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:27:08.271106 containerd[1466]: time="2025-01-13T21:27:08.270796401Z" level=info msg="shim disconnected" id=a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75 namespace=k8s.io Jan 13 21:27:08.271106 containerd[1466]: time="2025-01-13T21:27:08.270928312Z" level=warning msg="cleaning up after shim disconnected" id=a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75 namespace=k8s.io Jan 13 21:27:08.271106 containerd[1466]: time="2025-01-13T21:27:08.270939825Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:27:08.292514 containerd[1466]: time="2025-01-13T21:27:08.292459637Z" level=info msg="TearDown network for sandbox \"39eba55a3705d28a6c61a20c195e73c969d8650c30778e704bd095448faa93ab\" successfully" Jan 13 21:27:08.292514 containerd[1466]: time="2025-01-13T21:27:08.292498791Z" level=info msg="StopPodSandbox for \"39eba55a3705d28a6c61a20c195e73c969d8650c30778e704bd095448faa93ab\" returns successfully" Jan 13 21:27:08.302188 containerd[1466]: time="2025-01-13T21:27:08.302138949Z" level=info msg="StopContainer for \"a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75\" returns successfully" Jan 13 21:27:08.302544 containerd[1466]: time="2025-01-13T21:27:08.302507894Z" level=info msg="StopPodSandbox for \"d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60\"" Jan 13 21:27:08.302544 containerd[1466]: time="2025-01-13T21:27:08.302535176Z" level=info msg="Container to stop \"927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:27:08.302626 containerd[1466]: time="2025-01-13T21:27:08.302549543Z" level=info msg="Container to stop \"a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:27:08.302626 containerd[1466]: time="2025-01-13T21:27:08.302559743Z" level=info msg="Container to stop \"cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:27:08.302626 containerd[1466]: time="2025-01-13T21:27:08.302568981Z" level=info msg="Container to stop \"7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:27:08.302626 containerd[1466]: time="2025-01-13T21:27:08.302577897Z" level=info msg="Container to stop \"1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:27:08.308489 systemd[1]: cri-containerd-d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60.scope: Deactivated successfully. 
Jan 13 21:27:08.339573 kubelet[2499]: I0113 21:27:08.339526 2499 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07608cf7-4cb1-454d-b687-59338432ac98-cilium-config-path\") pod \"07608cf7-4cb1-454d-b687-59338432ac98\" (UID: \"07608cf7-4cb1-454d-b687-59338432ac98\") " Jan 13 21:27:08.339573 kubelet[2499]: I0113 21:27:08.339567 2499 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pncj\" (UniqueName: \"kubernetes.io/projected/07608cf7-4cb1-454d-b687-59338432ac98-kube-api-access-7pncj\") pod \"07608cf7-4cb1-454d-b687-59338432ac98\" (UID: \"07608cf7-4cb1-454d-b687-59338432ac98\") " Jan 13 21:27:08.342961 kubelet[2499]: I0113 21:27:08.342935 2499 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07608cf7-4cb1-454d-b687-59338432ac98-kube-api-access-7pncj" (OuterVolumeSpecName: "kube-api-access-7pncj") pod "07608cf7-4cb1-454d-b687-59338432ac98" (UID: "07608cf7-4cb1-454d-b687-59338432ac98"). InnerVolumeSpecName "kube-api-access-7pncj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:27:08.343006 kubelet[2499]: I0113 21:27:08.342958 2499 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07608cf7-4cb1-454d-b687-59338432ac98-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "07608cf7-4cb1-454d-b687-59338432ac98" (UID: "07608cf7-4cb1-454d-b687-59338432ac98"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:27:08.429638 containerd[1466]: time="2025-01-13T21:27:08.429396231Z" level=info msg="shim disconnected" id=d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60 namespace=k8s.io Jan 13 21:27:08.429806 containerd[1466]: time="2025-01-13T21:27:08.429774082Z" level=warning msg="cleaning up after shim disconnected" id=d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60 namespace=k8s.io Jan 13 21:27:08.429806 containerd[1466]: time="2025-01-13T21:27:08.429793670Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:27:08.440131 kubelet[2499]: I0113 21:27:08.440084 2499 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7pncj\" (UniqueName: \"kubernetes.io/projected/07608cf7-4cb1-454d-b687-59338432ac98-kube-api-access-7pncj\") on node \"localhost\" DevicePath \"\"" Jan 13 21:27:08.440131 kubelet[2499]: I0113 21:27:08.440111 2499 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07608cf7-4cb1-454d-b687-59338432ac98-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 21:27:08.443642 containerd[1466]: time="2025-01-13T21:27:08.443600243Z" level=info msg="TearDown network for sandbox \"d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60\" successfully" Jan 13 21:27:08.443707 containerd[1466]: time="2025-01-13T21:27:08.443640871Z" level=info msg="StopPodSandbox for \"d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60\" returns successfully" Jan 13 21:27:08.540623 kubelet[2499]: I0113 21:27:08.540587 2499 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-hostproc\") pod \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " Jan 13 21:27:08.540623 kubelet[2499]: I0113 
21:27:08.540622 2499 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg2dz\" (UniqueName: \"kubernetes.io/projected/0a89f52d-034f-4239-ab0d-8327fda3fdbe-kube-api-access-rg2dz\") pod \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " Jan 13 21:27:08.540623 kubelet[2499]: I0113 21:27:08.540640 2499 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a89f52d-034f-4239-ab0d-8327fda3fdbe-hubble-tls\") pod \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " Jan 13 21:27:08.540837 kubelet[2499]: I0113 21:27:08.540653 2499 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-cni-path\") pod \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " Jan 13 21:27:08.540837 kubelet[2499]: I0113 21:27:08.540670 2499 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a89f52d-034f-4239-ab0d-8327fda3fdbe-clustermesh-secrets\") pod \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " Jan 13 21:27:08.540837 kubelet[2499]: I0113 21:27:08.540688 2499 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a89f52d-034f-4239-ab0d-8327fda3fdbe-cilium-config-path\") pod \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " Jan 13 21:27:08.540837 kubelet[2499]: I0113 21:27:08.540700 2499 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-xtables-lock\") pod \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " Jan 13 21:27:08.540837 kubelet[2499]: I0113 21:27:08.540723 2499 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-host-proc-sys-kernel\") pod \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " Jan 13 21:27:08.540837 kubelet[2499]: I0113 21:27:08.540737 2499 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-cilium-run\") pod \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " Jan 13 21:27:08.540977 kubelet[2499]: I0113 21:27:08.540751 2499 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-host-proc-sys-net\") pod \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " Jan 13 21:27:08.540977 kubelet[2499]: I0113 21:27:08.540763 2499 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-bpf-maps\") pod \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " Jan 13 21:27:08.540977 kubelet[2499]: I0113 21:27:08.540749 2499 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-hostproc" (OuterVolumeSpecName: "hostproc") pod "0a89f52d-034f-4239-ab0d-8327fda3fdbe" (UID: "0a89f52d-034f-4239-ab0d-8327fda3fdbe"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.540977 kubelet[2499]: I0113 21:27:08.540778 2499 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-etc-cni-netd\") pod \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " Jan 13 21:27:08.540977 kubelet[2499]: I0113 21:27:08.540849 2499 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-cilium-cgroup\") pod \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " Jan 13 21:27:08.540977 kubelet[2499]: I0113 21:27:08.540870 2499 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-lib-modules\") pod \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\" (UID: \"0a89f52d-034f-4239-ab0d-8327fda3fdbe\") " Jan 13 21:27:08.541110 kubelet[2499]: I0113 21:27:08.540918 2499 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 13 21:27:08.541110 kubelet[2499]: I0113 21:27:08.540814 2499 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0a89f52d-034f-4239-ab0d-8327fda3fdbe" (UID: "0a89f52d-034f-4239-ab0d-8327fda3fdbe"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.541110 kubelet[2499]: I0113 21:27:08.540825 2499 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-cni-path" (OuterVolumeSpecName: "cni-path") pod "0a89f52d-034f-4239-ab0d-8327fda3fdbe" (UID: "0a89f52d-034f-4239-ab0d-8327fda3fdbe"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.541110 kubelet[2499]: I0113 21:27:08.540939 2499 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0a89f52d-034f-4239-ab0d-8327fda3fdbe" (UID: "0a89f52d-034f-4239-ab0d-8327fda3fdbe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.541110 kubelet[2499]: I0113 21:27:08.540968 2499 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0a89f52d-034f-4239-ab0d-8327fda3fdbe" (UID: "0a89f52d-034f-4239-ab0d-8327fda3fdbe"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.541222 kubelet[2499]: I0113 21:27:08.540988 2499 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0a89f52d-034f-4239-ab0d-8327fda3fdbe" (UID: "0a89f52d-034f-4239-ab0d-8327fda3fdbe"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.543414 kubelet[2499]: I0113 21:27:08.543384 2499 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0a89f52d-034f-4239-ab0d-8327fda3fdbe" (UID: "0a89f52d-034f-4239-ab0d-8327fda3fdbe"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.543656 kubelet[2499]: I0113 21:27:08.543545 2499 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0a89f52d-034f-4239-ab0d-8327fda3fdbe" (UID: "0a89f52d-034f-4239-ab0d-8327fda3fdbe"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.543656 kubelet[2499]: I0113 21:27:08.543566 2499 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0a89f52d-034f-4239-ab0d-8327fda3fdbe" (UID: "0a89f52d-034f-4239-ab0d-8327fda3fdbe"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.543656 kubelet[2499]: I0113 21:27:08.543574 2499 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0a89f52d-034f-4239-ab0d-8327fda3fdbe" (UID: "0a89f52d-034f-4239-ab0d-8327fda3fdbe"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.543884 kubelet[2499]: I0113 21:27:08.543850 2499 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a89f52d-034f-4239-ab0d-8327fda3fdbe-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0a89f52d-034f-4239-ab0d-8327fda3fdbe" (UID: "0a89f52d-034f-4239-ab0d-8327fda3fdbe"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:27:08.543950 kubelet[2499]: I0113 21:27:08.543895 2499 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a89f52d-034f-4239-ab0d-8327fda3fdbe-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0a89f52d-034f-4239-ab0d-8327fda3fdbe" (UID: "0a89f52d-034f-4239-ab0d-8327fda3fdbe"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:27:08.544529 kubelet[2499]: I0113 21:27:08.544491 2499 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a89f52d-034f-4239-ab0d-8327fda3fdbe-kube-api-access-rg2dz" (OuterVolumeSpecName: "kube-api-access-rg2dz") pod "0a89f52d-034f-4239-ab0d-8327fda3fdbe" (UID: "0a89f52d-034f-4239-ab0d-8327fda3fdbe"). InnerVolumeSpecName "kube-api-access-rg2dz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:27:08.544903 kubelet[2499]: I0113 21:27:08.544877 2499 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a89f52d-034f-4239-ab0d-8327fda3fdbe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0a89f52d-034f-4239-ab0d-8327fda3fdbe" (UID: "0a89f52d-034f-4239-ab0d-8327fda3fdbe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:27:08.641631 kubelet[2499]: I0113 21:27:08.641582 2499 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 13 21:27:08.641631 kubelet[2499]: I0113 21:27:08.641608 2499 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 13 21:27:08.641631 kubelet[2499]: I0113 21:27:08.641618 2499 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 13 21:27:08.641631 kubelet[2499]: I0113 21:27:08.641628 2499 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 13 21:27:08.641631 kubelet[2499]: I0113 21:27:08.641636 2499 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 13 21:27:08.641631 kubelet[2499]: I0113 21:27:08.641643 2499 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 13 21:27:08.641872 kubelet[2499]: I0113 21:27:08.641651 2499 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rg2dz\" (UniqueName: \"kubernetes.io/projected/0a89f52d-034f-4239-ab0d-8327fda3fdbe-kube-api-access-rg2dz\") on node \"localhost\" DevicePath \"\"" Jan 13 21:27:08.641872 kubelet[2499]: I0113 21:27:08.641659 2499 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a89f52d-034f-4239-ab0d-8327fda3fdbe-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 13 21:27:08.641872 kubelet[2499]: I0113 21:27:08.641667 2499 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a89f52d-034f-4239-ab0d-8327fda3fdbe-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 13 21:27:08.641872 kubelet[2499]: I0113 21:27:08.641674 2499 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 13 21:27:08.641872 kubelet[2499]: I0113 21:27:08.641682 2499 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a89f52d-034f-4239-ab0d-8327fda3fdbe-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 21:27:08.641872 kubelet[2499]: I0113 21:27:08.641689 2499 
reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 13 21:27:08.641872 kubelet[2499]: I0113 21:27:08.641697 2499 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a89f52d-034f-4239-ab0d-8327fda3fdbe-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 13 21:27:08.853591 kubelet[2499]: I0113 21:27:08.853546 2499 scope.go:117] "RemoveContainer" containerID="0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b" Jan 13 21:27:08.855832 containerd[1466]: time="2025-01-13T21:27:08.855519159Z" level=info msg="RemoveContainer for \"0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b\"" Jan 13 21:27:08.859930 containerd[1466]: time="2025-01-13T21:27:08.859887401Z" level=info msg="RemoveContainer for \"0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b\" returns successfully" Jan 13 21:27:08.860784 kubelet[2499]: I0113 21:27:08.860529 2499 scope.go:117] "RemoveContainer" containerID="0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b" Jan 13 21:27:08.860875 systemd[1]: Removed slice kubepods-besteffort-pod07608cf7_4cb1_454d_b687_59338432ac98.slice - libcontainer container kubepods-besteffort-pod07608cf7_4cb1_454d_b687_59338432ac98.slice. Jan 13 21:27:08.866157 systemd[1]: Removed slice kubepods-burstable-pod0a89f52d_034f_4239_ab0d_8327fda3fdbe.slice - libcontainer container kubepods-burstable-pod0a89f52d_034f_4239_ab0d_8327fda3fdbe.slice. Jan 13 21:27:08.866276 systemd[1]: kubepods-burstable-pod0a89f52d_034f_4239_ab0d_8327fda3fdbe.slice: Consumed 7.026s CPU time. Jan 13 21:27:08.868236 containerd[1466]: time="2025-01-13T21:27:08.867417108Z" level=error msg="ContainerStatus for \"0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b\": not found" Jan 13 21:27:08.882943 kubelet[2499]: E0113 21:27:08.882900 2499 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b\": not found" containerID="0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b" Jan 13 21:27:08.883107 kubelet[2499]: I0113 21:27:08.882941 2499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b"} err="failed to get container status \"0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b\": rpc error: code = NotFound desc = an error occurred when try to find container \"0189367c8a2166c332192aa084e295fe2c2dfa2e49d36147ee281304bee6550b\": not found" Jan 13 21:27:08.883107 kubelet[2499]: I0113 21:27:08.883022 2499 scope.go:117] "RemoveContainer" containerID="a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75" Jan 13 21:27:08.884666 containerd[1466]: time="2025-01-13T21:27:08.884303764Z" level=info msg="RemoveContainer for \"a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75\"" Jan 13 21:27:08.888129 containerd[1466]: time="2025-01-13T21:27:08.888088932Z" level=info msg="RemoveContainer for \"a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75\" returns 
successfully" Jan 13 21:27:08.888327 kubelet[2499]: I0113 21:27:08.888288 2499 scope.go:117] "RemoveContainer" containerID="927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab" Jan 13 21:27:08.889360 containerd[1466]: time="2025-01-13T21:27:08.889315143Z" level=info msg="RemoveContainer for \"927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab\"" Jan 13 21:27:08.892401 containerd[1466]: time="2025-01-13T21:27:08.892375637Z" level=info msg="RemoveContainer for \"927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab\" returns successfully" Jan 13 21:27:08.892565 kubelet[2499]: I0113 21:27:08.892543 2499 scope.go:117] "RemoveContainer" containerID="cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116" Jan 13 21:27:08.893364 containerd[1466]: time="2025-01-13T21:27:08.893323537Z" level=info msg="RemoveContainer for \"cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116\"" Jan 13 21:27:08.896383 containerd[1466]: time="2025-01-13T21:27:08.896361198Z" level=info msg="RemoveContainer for \"cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116\" returns successfully" Jan 13 21:27:08.896556 kubelet[2499]: I0113 21:27:08.896492 2499 scope.go:117] "RemoveContainer" containerID="1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d" Jan 13 21:27:08.897287 containerd[1466]: time="2025-01-13T21:27:08.897267078Z" level=info msg="RemoveContainer for \"1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d\"" Jan 13 21:27:08.900227 containerd[1466]: time="2025-01-13T21:27:08.900202763Z" level=info msg="RemoveContainer for \"1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d\" returns successfully" Jan 13 21:27:08.900390 kubelet[2499]: I0113 21:27:08.900361 2499 scope.go:117] "RemoveContainer" containerID="7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a" Jan 13 21:27:08.901352 containerd[1466]: time="2025-01-13T21:27:08.901288797Z" level=info msg="RemoveContainer for \"7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a\"" Jan 13 21:27:08.905253 containerd[1466]: time="2025-01-13T21:27:08.905221366Z" level=info msg="RemoveContainer for \"7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a\" returns successfully" Jan 13 21:27:08.905406 kubelet[2499]: I0113 21:27:08.905372 2499 scope.go:117] "RemoveContainer" containerID="a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75" Jan 13 21:27:08.905746 containerd[1466]: time="2025-01-13T21:27:08.905696794Z" level=error msg="ContainerStatus for \"a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75\": not found" Jan 13 21:27:08.905887 kubelet[2499]: E0113 21:27:08.905845 2499 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75\": not found" containerID="a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75" Jan 13 21:27:08.905941 kubelet[2499]: I0113 21:27:08.905885 2499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75"} err="failed to get container status \"a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"a49fe94975e1fed9572ca603deee1328dfc3a5ce54922a764026a9d2ea1dec75\": not found" Jan 13 21:27:08.905941 kubelet[2499]: I0113 21:27:08.905913 2499 scope.go:117] "RemoveContainer" containerID="927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab" Jan 13 21:27:08.906086 containerd[1466]: time="2025-01-13T21:27:08.906047393Z" level=error msg="ContainerStatus for \"927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab\": not found" Jan 13 21:27:08.906167 kubelet[2499]: E0113 21:27:08.906143 2499 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab\": not found" containerID="927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab" Jan 13 21:27:08.906203 kubelet[2499]: I0113 21:27:08.906165 2499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab"} err="failed to get container status \"927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"927e9582080fd685540fef96dd3fbefe2676792a65603ffcfd8b54faf57b72ab\": not found" Jan 13 21:27:08.906203 kubelet[2499]: I0113 21:27:08.906181 2499 scope.go:117] "RemoveContainer" containerID="cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116" Jan 13 21:27:08.906379 containerd[1466]: time="2025-01-13T21:27:08.906325675Z" level=error msg="ContainerStatus for \"cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116\": not found" Jan 13 21:27:08.906468 kubelet[2499]: E0113 21:27:08.906440 2499 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116\": not found" containerID="cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116" Jan 13 21:27:08.906496 kubelet[2499]: I0113 21:27:08.906464 2499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116"} err="failed to get container status \"cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116\": rpc error: code = NotFound desc = an error occurred when try to find container \"cad248a5fc4edaea2506362d5e7d8eb7b65ed94936a97ad6302b01f6812ca116\": not found" Jan 13 21:27:08.906496 kubelet[2499]: I0113 21:27:08.906483 2499 scope.go:117] "RemoveContainer" containerID="1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d" Jan 13 21:27:08.906630 containerd[1466]: time="2025-01-13T21:27:08.906602453Z" level=error msg="ContainerStatus for \"1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d\": not found" Jan 13 21:27:08.906804 kubelet[2499]: E0113 21:27:08.906783 2499 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d\": not found" containerID="1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d" Jan 13 21:27:08.906849 kubelet[2499]: I0113 21:27:08.906807 2499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d"} err="failed to get container status \"1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d3d00c789465acaaedebac190ae5595ac9e263d5e4720717d7d6613ad58021d\": not found" Jan 13 21:27:08.906849 kubelet[2499]: I0113 21:27:08.906822 2499 scope.go:117] "RemoveContainer" containerID="7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a" Jan 13 21:27:08.907022 containerd[1466]: time="2025-01-13T21:27:08.906985915Z" level=error msg="ContainerStatus for \"7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a\": not found" Jan 13 21:27:08.907135 kubelet[2499]: E0113 21:27:08.907110 2499 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a\": not found" containerID="7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a" Jan 13 21:27:08.907135 kubelet[2499]: I0113 21:27:08.907130 2499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a"} err="failed to get container status \"7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c49ffa1b284c5c487a871a1dfe15170643180d88a479a7eb5ff033c2ec00c2a\": not found" Jan 13 21:27:09.115791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39eba55a3705d28a6c61a20c195e73c969d8650c30778e704bd095448faa93ab-rootfs.mount: Deactivated successfully. Jan 13 21:27:09.115922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60-rootfs.mount: Deactivated successfully. Jan 13 21:27:09.116029 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d3e262ce9a61dd034c770611d2ce8d7bece99a9b35be38d5db7191947731cf60-shm.mount: Deactivated successfully. Jan 13 21:27:09.116134 systemd[1]: var-lib-kubelet-pods-07608cf7\x2d4cb1\x2d454d\x2db687\x2d59338432ac98-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7pncj.mount: Deactivated successfully. Jan 13 21:27:09.116238 systemd[1]: var-lib-kubelet-pods-0a89f52d\x2d034f\x2d4239\x2dab0d\x2d8327fda3fdbe-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 21:27:09.116366 systemd[1]: var-lib-kubelet-pods-0a89f52d\x2d034f\x2d4239\x2dab0d\x2d8327fda3fdbe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drg2dz.mount: Deactivated successfully. Jan 13 21:27:09.116502 systemd[1]: var-lib-kubelet-pods-0a89f52d\x2d034f\x2d4239\x2dab0d\x2d8327fda3fdbe-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 13 21:27:09.469818 kubelet[2499]: I0113 21:27:09.469776 2499 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07608cf7-4cb1-454d-b687-59338432ac98" path="/var/lib/kubelet/pods/07608cf7-4cb1-454d-b687-59338432ac98/volumes" Jan 13 21:27:09.470487 kubelet[2499]: I0113 21:27:09.470466 2499 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a89f52d-034f-4239-ab0d-8327fda3fdbe" path="/var/lib/kubelet/pods/0a89f52d-034f-4239-ab0d-8327fda3fdbe/volumes" Jan 13 21:27:10.079046 sshd[4155]: pam_unix(sshd:session): session closed for user core Jan 13 21:27:10.096380 systemd[1]: sshd@24-10.0.0.102:22-10.0.0.1:47490.service: Deactivated successfully. Jan 13 21:27:10.098256 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 21:27:10.099853 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit. Jan 13 21:27:10.105624 systemd[1]: Started sshd@25-10.0.0.102:22-10.0.0.1:49660.service - OpenSSH per-connection server daemon (10.0.0.1:49660). Jan 13 21:27:10.106594 systemd-logind[1447]: Removed session 24. Jan 13 21:27:10.139162 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 49660 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:27:10.140854 sshd[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:10.145109 systemd-logind[1447]: New session 25 of user core. Jan 13 21:27:10.153479 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 21:27:10.617219 sshd[4320]: pam_unix(sshd:session): session closed for user core Jan 13 21:27:10.624417 systemd[1]: sshd@25-10.0.0.102:22-10.0.0.1:49660.service: Deactivated successfully. Jan 13 21:27:10.628168 kubelet[2499]: E0113 21:27:10.628107 2499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a89f52d-034f-4239-ab0d-8327fda3fdbe" containerName="mount-cgroup" Jan 13 21:27:10.628168 kubelet[2499]: E0113 21:27:10.628148 2499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a89f52d-034f-4239-ab0d-8327fda3fdbe" containerName="cilium-agent" Jan 13 21:27:10.628168 kubelet[2499]: E0113 21:27:10.628158 2499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07608cf7-4cb1-454d-b687-59338432ac98" containerName="cilium-operator" Jan 13 21:27:10.628168 kubelet[2499]: E0113 21:27:10.628167 2499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a89f52d-034f-4239-ab0d-8327fda3fdbe" containerName="clean-cilium-state" Jan 13 21:27:10.628168 kubelet[2499]: E0113 21:27:10.628177 2499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a89f52d-034f-4239-ab0d-8327fda3fdbe" containerName="apply-sysctl-overwrites" Jan 13 21:27:10.628897 kubelet[2499]: E0113 21:27:10.628187 2499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a89f52d-034f-4239-ab0d-8327fda3fdbe" containerName="mount-bpf-fs" Jan 13 21:27:10.628897 kubelet[2499]: I0113 21:27:10.628216 2499 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a89f52d-034f-4239-ab0d-8327fda3fdbe" containerName="cilium-agent" Jan 13 21:27:10.628897 kubelet[2499]: I0113 21:27:10.628226 2499 memory_manager.go:354] "RemoveStaleState removing state" podUID="07608cf7-4cb1-454d-b687-59338432ac98" containerName="cilium-operator" Jan 13 21:27:10.628259 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 21:27:10.630566 systemd-logind[1447]: Session 25 logged out. Waiting for processes to exit. 
Jan 13 21:27:10.641486 systemd[1]: Started sshd@26-10.0.0.102:22-10.0.0.1:49674.service - OpenSSH per-connection server daemon (10.0.0.1:49674). Jan 13 21:27:10.643002 systemd-logind[1447]: Removed session 25. Jan 13 21:27:10.656129 kubelet[2499]: I0113 21:27:10.656072 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/76667857-3caa-4897-8355-e4dff99bd180-host-proc-sys-kernel\") pod \"cilium-p7zhc\" (UID: \"76667857-3caa-4897-8355-e4dff99bd180\") " pod="kube-system/cilium-p7zhc" Jan 13 21:27:10.656129 kubelet[2499]: I0113 21:27:10.656124 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/76667857-3caa-4897-8355-e4dff99bd180-host-proc-sys-net\") pod \"cilium-p7zhc\" (UID: \"76667857-3caa-4897-8355-e4dff99bd180\") " pod="kube-system/cilium-p7zhc" Jan 13 21:27:10.656293 kubelet[2499]: I0113 21:27:10.656151 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/76667857-3caa-4897-8355-e4dff99bd180-cilium-ipsec-secrets\") pod \"cilium-p7zhc\" (UID: \"76667857-3caa-4897-8355-e4dff99bd180\") " pod="kube-system/cilium-p7zhc" Jan 13 21:27:10.656293 kubelet[2499]: I0113 21:27:10.656173 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/76667857-3caa-4897-8355-e4dff99bd180-cilium-cgroup\") pod \"cilium-p7zhc\" (UID: \"76667857-3caa-4897-8355-e4dff99bd180\") " pod="kube-system/cilium-p7zhc" Jan 13 21:27:10.656293 kubelet[2499]: I0113 21:27:10.656194 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/76667857-3caa-4897-8355-e4dff99bd180-cni-path\") pod \"cilium-p7zhc\" (UID: \"76667857-3caa-4897-8355-e4dff99bd180\") " pod="kube-system/cilium-p7zhc" Jan 13 21:27:10.656293 kubelet[2499]: I0113 21:27:10.656212 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/76667857-3caa-4897-8355-e4dff99bd180-cilium-run\") pod \"cilium-p7zhc\" (UID: \"76667857-3caa-4897-8355-e4dff99bd180\") " pod="kube-system/cilium-p7zhc" Jan 13 21:27:10.656293 kubelet[2499]: I0113 21:27:10.656230 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76667857-3caa-4897-8355-e4dff99bd180-etc-cni-netd\") pod \"cilium-p7zhc\" (UID: \"76667857-3caa-4897-8355-e4dff99bd180\") " pod="kube-system/cilium-p7zhc" Jan 13 21:27:10.656293 kubelet[2499]: I0113 21:27:10.656249 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76667857-3caa-4897-8355-e4dff99bd180-lib-modules\") pod \"cilium-p7zhc\" (UID: \"76667857-3caa-4897-8355-e4dff99bd180\") " pod="kube-system/cilium-p7zhc" Jan 13 21:27:10.656600 kubelet[2499]: I0113 21:27:10.656269 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/76667857-3caa-4897-8355-e4dff99bd180-hubble-tls\") pod \"cilium-p7zhc\" (UID: \"76667857-3caa-4897-8355-e4dff99bd180\") " 
pod="kube-system/cilium-p7zhc" Jan 13 21:27:10.656600 kubelet[2499]: I0113 21:27:10.656288 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc4d8\" (UniqueName: \"kubernetes.io/projected/76667857-3caa-4897-8355-e4dff99bd180-kube-api-access-wc4d8\") pod \"cilium-p7zhc\" (UID: \"76667857-3caa-4897-8355-e4dff99bd180\") " pod="kube-system/cilium-p7zhc" Jan 13 21:27:10.656600 kubelet[2499]: I0113 21:27:10.656310 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76667857-3caa-4897-8355-e4dff99bd180-xtables-lock\") pod \"cilium-p7zhc\" (UID: \"76667857-3caa-4897-8355-e4dff99bd180\") " pod="kube-system/cilium-p7zhc" Jan 13 21:27:10.656600 kubelet[2499]: I0113 21:27:10.656352 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/76667857-3caa-4897-8355-e4dff99bd180-clustermesh-secrets\") pod \"cilium-p7zhc\" (UID: \"76667857-3caa-4897-8355-e4dff99bd180\") " pod="kube-system/cilium-p7zhc" Jan 13 21:27:10.656600 kubelet[2499]: I0113 21:27:10.656376 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/76667857-3caa-4897-8355-e4dff99bd180-bpf-maps\") pod \"cilium-p7zhc\" (UID: \"76667857-3caa-4897-8355-e4dff99bd180\") " pod="kube-system/cilium-p7zhc" Jan 13 21:27:10.656600 kubelet[2499]: I0113 21:27:10.656396 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/76667857-3caa-4897-8355-e4dff99bd180-hostproc\") pod \"cilium-p7zhc\" (UID: \"76667857-3caa-4897-8355-e4dff99bd180\") " pod="kube-system/cilium-p7zhc" Jan 13 21:27:10.656732 kubelet[2499]: I0113 21:27:10.656415 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76667857-3caa-4897-8355-e4dff99bd180-cilium-config-path\") pod \"cilium-p7zhc\" (UID: \"76667857-3caa-4897-8355-e4dff99bd180\") " pod="kube-system/cilium-p7zhc" Jan 13 21:27:10.661315 systemd[1]: Created slice kubepods-burstable-pod76667857_3caa_4897_8355_e4dff99bd180.slice - libcontainer container kubepods-burstable-pod76667857_3caa_4897_8355_e4dff99bd180.slice. Jan 13 21:27:10.702476 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 49674 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:27:10.707074 sshd[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:10.712419 systemd-logind[1447]: New session 26 of user core. Jan 13 21:27:10.726464 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 21:27:10.778364 sshd[4333]: pam_unix(sshd:session): session closed for user core Jan 13 21:27:10.790985 systemd[1]: sshd@26-10.0.0.102:22-10.0.0.1:49674.service: Deactivated successfully. Jan 13 21:27:10.793200 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 21:27:10.795415 systemd-logind[1447]: Session 26 logged out. Waiting for processes to exit. Jan 13 21:27:10.801745 systemd[1]: Started sshd@27-10.0.0.102:22-10.0.0.1:49690.service - OpenSSH per-connection server daemon (10.0.0.1:49690). Jan 13 21:27:10.802712 systemd-logind[1447]: Removed session 26. 
Jan 13 21:27:10.832495 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 49690 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:27:10.834081 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:10.838267 systemd-logind[1447]: New session 27 of user core.
Jan 13 21:27:10.845443 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 13 21:27:10.964923 kubelet[2499]: E0113 21:27:10.964880 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:10.965468 containerd[1466]: time="2025-01-13T21:27:10.965425485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p7zhc,Uid:76667857-3caa-4897-8355-e4dff99bd180,Namespace:kube-system,Attempt:0,}"
Jan 13 21:27:10.985964 containerd[1466]: time="2025-01-13T21:27:10.985839962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:27:10.985964 containerd[1466]: time="2025-01-13T21:27:10.985891751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:27:10.985964 containerd[1466]: time="2025-01-13T21:27:10.985910126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:10.987086 containerd[1466]: time="2025-01-13T21:27:10.986992711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:11.005634 systemd[1]: Started cri-containerd-c4e307ffccfd3aff5b4c25927360448d93309699d1305cf40d075ae24f3ae4a3.scope - libcontainer container c4e307ffccfd3aff5b4c25927360448d93309699d1305cf40d075ae24f3ae4a3.
Jan 13 21:27:11.027507 containerd[1466]: time="2025-01-13T21:27:11.027449885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p7zhc,Uid:76667857-3caa-4897-8355-e4dff99bd180,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4e307ffccfd3aff5b4c25927360448d93309699d1305cf40d075ae24f3ae4a3\""
Jan 13 21:27:11.028249 kubelet[2499]: E0113 21:27:11.028220 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:11.030294 containerd[1466]: time="2025-01-13T21:27:11.030223244Z" level=info msg="CreateContainer within sandbox \"c4e307ffccfd3aff5b4c25927360448d93309699d1305cf40d075ae24f3ae4a3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 21:27:11.049848 containerd[1466]: time="2025-01-13T21:27:11.049699015Z" level=info msg="CreateContainer within sandbox \"c4e307ffccfd3aff5b4c25927360448d93309699d1305cf40d075ae24f3ae4a3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c3aecec1da4611a9bcfaab9b4144aa89b39cd98a1b2cde6c84bc49da04a86837\""
Jan 13 21:27:11.050482 containerd[1466]: time="2025-01-13T21:27:11.050439626Z" level=info msg="StartContainer for \"c3aecec1da4611a9bcfaab9b4144aa89b39cd98a1b2cde6c84bc49da04a86837\""
Jan 13 21:27:11.077464 systemd[1]: Started cri-containerd-c3aecec1da4611a9bcfaab9b4144aa89b39cd98a1b2cde6c84bc49da04a86837.scope - libcontainer container c3aecec1da4611a9bcfaab9b4144aa89b39cd98a1b2cde6c84bc49da04a86837.
Jan 13 21:27:11.100356 containerd[1466]: time="2025-01-13T21:27:11.100298518Z" level=info msg="StartContainer for \"c3aecec1da4611a9bcfaab9b4144aa89b39cd98a1b2cde6c84bc49da04a86837\" returns successfully"
Jan 13 21:27:11.111750 systemd[1]: cri-containerd-c3aecec1da4611a9bcfaab9b4144aa89b39cd98a1b2cde6c84bc49da04a86837.scope: Deactivated successfully.
Jan 13 21:27:11.145491 containerd[1466]: time="2025-01-13T21:27:11.145420095Z" level=info msg="shim disconnected" id=c3aecec1da4611a9bcfaab9b4144aa89b39cd98a1b2cde6c84bc49da04a86837 namespace=k8s.io
Jan 13 21:27:11.145491 containerd[1466]: time="2025-01-13T21:27:11.145486902Z" level=warning msg="cleaning up after shim disconnected" id=c3aecec1da4611a9bcfaab9b4144aa89b39cd98a1b2cde6c84bc49da04a86837 namespace=k8s.io
Jan 13 21:27:11.145491 containerd[1466]: time="2025-01-13T21:27:11.145495909Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:27:11.866719 kubelet[2499]: E0113 21:27:11.866691 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:11.868489 containerd[1466]: time="2025-01-13T21:27:11.868441302Z" level=info msg="CreateContainer within sandbox \"c4e307ffccfd3aff5b4c25927360448d93309699d1305cf40d075ae24f3ae4a3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:27:11.894930 containerd[1466]: time="2025-01-13T21:27:11.894883257Z" level=info msg="CreateContainer within sandbox \"c4e307ffccfd3aff5b4c25927360448d93309699d1305cf40d075ae24f3ae4a3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ba3a955646fee1b3988fefb1b1f29b2670a1868324139247238b98e3b3e372e7\""
Jan 13 21:27:11.895388 containerd[1466]: time="2025-01-13T21:27:11.895351960Z" level=info msg="StartContainer for \"ba3a955646fee1b3988fefb1b1f29b2670a1868324139247238b98e3b3e372e7\""
Jan 13 21:27:11.930475 systemd[1]: Started cri-containerd-ba3a955646fee1b3988fefb1b1f29b2670a1868324139247238b98e3b3e372e7.scope - libcontainer container ba3a955646fee1b3988fefb1b1f29b2670a1868324139247238b98e3b3e372e7.
Jan 13 21:27:11.954189 containerd[1466]: time="2025-01-13T21:27:11.954151611Z" level=info msg="StartContainer for \"ba3a955646fee1b3988fefb1b1f29b2670a1868324139247238b98e3b3e372e7\" returns successfully"
Jan 13 21:27:11.960295 systemd[1]: cri-containerd-ba3a955646fee1b3988fefb1b1f29b2670a1868324139247238b98e3b3e372e7.scope: Deactivated successfully.
Jan 13 21:27:11.989020 containerd[1466]: time="2025-01-13T21:27:11.988946436Z" level=info msg="shim disconnected" id=ba3a955646fee1b3988fefb1b1f29b2670a1868324139247238b98e3b3e372e7 namespace=k8s.io
Jan 13 21:27:11.989020 containerd[1466]: time="2025-01-13T21:27:11.989013293Z" level=warning msg="cleaning up after shim disconnected" id=ba3a955646fee1b3988fefb1b1f29b2670a1868324139247238b98e3b3e372e7 namespace=k8s.io
Jan 13 21:27:11.989511 containerd[1466]: time="2025-01-13T21:27:11.989025867Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:27:12.467350 kubelet[2499]: E0113 21:27:12.467307 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:12.518633 kubelet[2499]: E0113 21:27:12.518572 2499 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:27:12.762218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba3a955646fee1b3988fefb1b1f29b2670a1868324139247238b98e3b3e372e7-rootfs.mount: Deactivated successfully.
Jan 13 21:27:12.870174 kubelet[2499]: E0113 21:27:12.870142 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:12.871769 containerd[1466]: time="2025-01-13T21:27:12.871710429Z" level=info msg="CreateContainer within sandbox \"c4e307ffccfd3aff5b4c25927360448d93309699d1305cf40d075ae24f3ae4a3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:27:12.899762 containerd[1466]: time="2025-01-13T21:27:12.899681428Z" level=info msg="CreateContainer within sandbox \"c4e307ffccfd3aff5b4c25927360448d93309699d1305cf40d075ae24f3ae4a3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"55cd0fe833207048503fb067f1e42d3df9e64b21660707cbee3e07787b88dc71\""
Jan 13 21:27:12.900281 containerd[1466]: time="2025-01-13T21:27:12.900250694Z" level=info msg="StartContainer for \"55cd0fe833207048503fb067f1e42d3df9e64b21660707cbee3e07787b88dc71\""
Jan 13 21:27:12.938524 systemd[1]: Started cri-containerd-55cd0fe833207048503fb067f1e42d3df9e64b21660707cbee3e07787b88dc71.scope - libcontainer container 55cd0fe833207048503fb067f1e42d3df9e64b21660707cbee3e07787b88dc71.
Jan 13 21:27:12.970274 containerd[1466]: time="2025-01-13T21:27:12.970144396Z" level=info msg="StartContainer for \"55cd0fe833207048503fb067f1e42d3df9e64b21660707cbee3e07787b88dc71\" returns successfully"
Jan 13 21:27:12.976011 systemd[1]: cri-containerd-55cd0fe833207048503fb067f1e42d3df9e64b21660707cbee3e07787b88dc71.scope: Deactivated successfully.
Jan 13 21:27:13.003963 containerd[1466]: time="2025-01-13T21:27:13.003880302Z" level=info msg="shim disconnected" id=55cd0fe833207048503fb067f1e42d3df9e64b21660707cbee3e07787b88dc71 namespace=k8s.io
Jan 13 21:27:13.003963 containerd[1466]: time="2025-01-13T21:27:13.003942711Z" level=warning msg="cleaning up after shim disconnected" id=55cd0fe833207048503fb067f1e42d3df9e64b21660707cbee3e07787b88dc71 namespace=k8s.io
Jan 13 21:27:13.003963 containerd[1466]: time="2025-01-13T21:27:13.003954084Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:27:13.762831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55cd0fe833207048503fb067f1e42d3df9e64b21660707cbee3e07787b88dc71-rootfs.mount: Deactivated successfully.
Jan 13 21:27:13.873760 kubelet[2499]: E0113 21:27:13.873713 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:13.876116 containerd[1466]: time="2025-01-13T21:27:13.876052914Z" level=info msg="CreateContainer within sandbox \"c4e307ffccfd3aff5b4c25927360448d93309699d1305cf40d075ae24f3ae4a3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:27:13.889255 containerd[1466]: time="2025-01-13T21:27:13.889208040Z" level=info msg="CreateContainer within sandbox \"c4e307ffccfd3aff5b4c25927360448d93309699d1305cf40d075ae24f3ae4a3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"87033406170201213f649d845ae015b791e8fa81e386e236e5c0e5de5278bc11\""
Jan 13 21:27:13.889957 containerd[1466]: time="2025-01-13T21:27:13.889687063Z" level=info msg="StartContainer for \"87033406170201213f649d845ae015b791e8fa81e386e236e5c0e5de5278bc11\""
Jan 13 21:27:13.919447 systemd[1]: Started cri-containerd-87033406170201213f649d845ae015b791e8fa81e386e236e5c0e5de5278bc11.scope - libcontainer container 87033406170201213f649d845ae015b791e8fa81e386e236e5c0e5de5278bc11.
Jan 13 21:27:13.941080 systemd[1]: cri-containerd-87033406170201213f649d845ae015b791e8fa81e386e236e5c0e5de5278bc11.scope: Deactivated successfully.
Jan 13 21:27:13.943120 containerd[1466]: time="2025-01-13T21:27:13.943087094Z" level=info msg="StartContainer for \"87033406170201213f649d845ae015b791e8fa81e386e236e5c0e5de5278bc11\" returns successfully"
Jan 13 21:27:13.965271 containerd[1466]: time="2025-01-13T21:27:13.965210851Z" level=info msg="shim disconnected" id=87033406170201213f649d845ae015b791e8fa81e386e236e5c0e5de5278bc11 namespace=k8s.io
Jan 13 21:27:13.965271 containerd[1466]: time="2025-01-13T21:27:13.965263992Z" level=warning msg="cleaning up after shim disconnected" id=87033406170201213f649d845ae015b791e8fa81e386e236e5c0e5de5278bc11 namespace=k8s.io
Jan 13 21:27:13.965271 containerd[1466]: time="2025-01-13T21:27:13.965274682Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:27:14.762409 systemd[1]: run-containerd-runc-k8s.io-87033406170201213f649d845ae015b791e8fa81e386e236e5c0e5de5278bc11-runc.ti392P.mount: Deactivated successfully.
Jan 13 21:27:14.762523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87033406170201213f649d845ae015b791e8fa81e386e236e5c0e5de5278bc11-rootfs.mount: Deactivated successfully.
Jan 13 21:27:14.877228 kubelet[2499]: E0113 21:27:14.877192 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:14.879173 containerd[1466]: time="2025-01-13T21:27:14.879129162Z" level=info msg="CreateContainer within sandbox \"c4e307ffccfd3aff5b4c25927360448d93309699d1305cf40d075ae24f3ae4a3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:27:14.897203 containerd[1466]: time="2025-01-13T21:27:14.897152362Z" level=info msg="CreateContainer within sandbox \"c4e307ffccfd3aff5b4c25927360448d93309699d1305cf40d075ae24f3ae4a3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a4bd05550036943c818055b2da3ec414e9541af37915a391d94c2776f0b04bf8\""
Jan 13 21:27:14.897708 containerd[1466]: time="2025-01-13T21:27:14.897673194Z" level=info msg="StartContainer for \"a4bd05550036943c818055b2da3ec414e9541af37915a391d94c2776f0b04bf8\""
Jan 13 21:27:14.930487 systemd[1]: Started cri-containerd-a4bd05550036943c818055b2da3ec414e9541af37915a391d94c2776f0b04bf8.scope - libcontainer container a4bd05550036943c818055b2da3ec414e9541af37915a391d94c2776f0b04bf8.
Jan 13 21:27:14.961510 containerd[1466]: time="2025-01-13T21:27:14.961471844Z" level=info msg="StartContainer for \"a4bd05550036943c818055b2da3ec414e9541af37915a391d94c2776f0b04bf8\" returns successfully"
Jan 13 21:27:15.374376 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 13 21:27:15.881785 kubelet[2499]: E0113 21:27:15.881758 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:15.893828 kubelet[2499]: I0113 21:27:15.893778 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p7zhc" podStartSLOduration=5.893761911 podStartE2EDuration="5.893761911s" podCreationTimestamp="2025-01-13 21:27:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:27:15.893177789 +0000 UTC m=+88.501713085" watchObservedRunningTime="2025-01-13 21:27:15.893761911 +0000 UTC m=+88.502297207"
Jan 13 21:27:16.967291 kubelet[2499]: E0113 21:27:16.966476 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:17.249160 systemd[1]: run-containerd-runc-k8s.io-a4bd05550036943c818055b2da3ec414e9541af37915a391d94c2776f0b04bf8-runc.q9y1yW.mount: Deactivated successfully.
Jan 13 21:27:17.468291 kubelet[2499]: E0113 21:27:17.468250 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:18.456838 systemd-networkd[1392]: lxc_health: Link UP
Jan 13 21:27:18.468720 systemd-networkd[1392]: lxc_health: Gained carrier
Jan 13 21:27:18.967147 kubelet[2499]: E0113 21:27:18.967112 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:19.696521 systemd-networkd[1392]: lxc_health: Gained IPv6LL
Jan 13 21:27:19.888553 kubelet[2499]: E0113 21:27:19.888507 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:20.467688 kubelet[2499]: E0113 21:27:20.467645 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:20.890118 kubelet[2499]: E0113 21:27:20.889993 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:27:21.551814 kubelet[2499]: E0113 21:27:21.551741 2499 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60608->127.0.0.1:39519: write tcp 127.0.0.1:60608->127.0.0.1:39519: write: broken pipe
Jan 13 21:27:23.601637 systemd[1]: run-containerd-runc-k8s.io-a4bd05550036943c818055b2da3ec414e9541af37915a391d94c2776f0b04bf8-runc.f0A2QD.mount: Deactivated successfully.
Jan 13 21:27:23.653852 sshd[4345]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:23.656956 systemd[1]: sshd@27-10.0.0.102:22-10.0.0.1:49690.service: Deactivated successfully.
Jan 13 21:27:23.659456 systemd[1]: session-27.scope: Deactivated successfully.
Jan 13 21:27:23.661687 systemd-logind[1447]: Session 27 logged out. Waiting for processes to exit.
Jan 13 21:27:23.662723 systemd-logind[1447]: Removed session 27.