Jan 13 20:39:04.373015 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025 Jan 13 20:39:04.373042 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:39:04.373056 kernel: BIOS-provided physical RAM map: Jan 13 20:39:04.373063 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 13 20:39:04.373071 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 13 20:39:04.373078 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 13 20:39:04.373087 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 13 20:39:04.373095 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 13 20:39:04.373103 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 13 20:39:04.373113 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 13 20:39:04.373121 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 13 20:39:04.373129 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 13 20:39:04.373141 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 13 20:39:04.373149 kernel: NX (Execute Disable) protection: active Jan 13 20:39:04.373159 kernel: APIC: Static calls initialized Jan 13 20:39:04.373173 kernel: SMBIOS 2.8 present. 
Jan 13 20:39:04.373181 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 13 20:39:04.373189 kernel: Hypervisor detected: KVM Jan 13 20:39:04.373198 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 13 20:39:04.373206 kernel: kvm-clock: using sched offset of 5280379648 cycles Jan 13 20:39:04.373215 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 13 20:39:04.373223 kernel: tsc: Detected 2794.748 MHz processor Jan 13 20:39:04.373232 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 13 20:39:04.373241 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 13 20:39:04.373252 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 13 20:39:04.373261 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 13 20:39:04.373270 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 13 20:39:04.373278 kernel: Using GB pages for direct mapping Jan 13 20:39:04.373287 kernel: ACPI: Early table checksum verification disabled Jan 13 20:39:04.373295 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 13 20:39:04.373304 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:39:04.373312 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:39:04.373321 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:39:04.373332 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 13 20:39:04.373341 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:39:04.373349 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:39:04.373358 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:39:04.373366 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:39:04.373375 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Jan 13 20:39:04.373384 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Jan 13 20:39:04.373397 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 13 20:39:04.373408 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Jan 13 20:39:04.373417 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Jan 13 20:39:04.373426 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Jan 13 20:39:04.373435 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Jan 13 20:39:04.373446 kernel: No NUMA configuration found Jan 13 20:39:04.373455 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 13 20:39:04.373467 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 13 20:39:04.375264 kernel: Zone ranges: Jan 13 20:39:04.375279 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 20:39:04.375289 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 13 20:39:04.375299 kernel: Normal empty Jan 13 20:39:04.375310 kernel: Movable zone start for each node Jan 13 20:39:04.375321 kernel: Early memory node ranges Jan 13 20:39:04.375331 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 13 20:39:04.375342 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 13 20:39:04.375352 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 13 20:39:04.375370 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 20:39:04.375386 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 13 20:39:04.375397 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 13 20:39:04.375407 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 13 20:39:04.375418 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 13 20:39:04.375429 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 13 20:39:04.375440 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 13 20:39:04.375451 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 13 20:39:04.375462 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 20:39:04.375489 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 13 20:39:04.375500 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 13 20:39:04.375511 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 20:39:04.375521 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 13 20:39:04.375532 kernel: TSC deadline timer available Jan 13 20:39:04.375543 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 13 20:39:04.375554 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 13 20:39:04.375566 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 13 20:39:04.375581 kernel: kvm-guest: setup PV sched yield Jan 13 20:39:04.375599 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 13 20:39:04.375611 kernel: Booting paravirtualized kernel on KVM Jan 13 20:39:04.375622 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 20:39:04.375633 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 13 20:39:04.375644 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 13 20:39:04.375656 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 13 20:39:04.375667 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 13 20:39:04.375678 kernel: kvm-guest: PV spinlocks enabled Jan 13 20:39:04.375690 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 13 20:39:04.375708 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:39:04.375720 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 20:39:04.375731 kernel: random: crng init done Jan 13 20:39:04.375742 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 20:39:04.375753 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 20:39:04.375763 kernel: Fallback order for Node 0: 0 Jan 13 20:39:04.375775 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 13 20:39:04.375785 kernel: Policy zone: DMA32 Jan 13 20:39:04.375802 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 20:39:04.375813 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 136900K reserved, 0K cma-reserved) Jan 13 20:39:04.375827 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 13 20:39:04.375838 kernel: ftrace: allocating 37920 entries in 149 pages Jan 13 20:39:04.375910 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 20:39:04.375922 kernel: Dynamic Preempt: voluntary Jan 13 20:39:04.375933 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 20:39:04.375946 kernel: rcu: RCU event tracing is enabled. Jan 13 20:39:04.375958 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 13 20:39:04.375977 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 20:39:04.375988 kernel: Rude variant of Tasks RCU enabled. Jan 13 20:39:04.375999 kernel: Tracing variant of Tasks RCU enabled. Jan 13 20:39:04.376011 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 20:39:04.376026 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 13 20:39:04.376038 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 13 20:39:04.376049 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 20:39:04.376060 kernel: Console: colour VGA+ 80x25 Jan 13 20:39:04.376070 kernel: printk: console [ttyS0] enabled Jan 13 20:39:04.376081 kernel: ACPI: Core revision 20230628 Jan 13 20:39:04.376097 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 13 20:39:04.376108 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 20:39:04.376118 kernel: x2apic enabled Jan 13 20:39:04.376129 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 20:39:04.376141 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 13 20:39:04.376153 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 13 20:39:04.376164 kernel: kvm-guest: setup PV IPIs Jan 13 20:39:04.376194 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 13 20:39:04.376206 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 13 20:39:04.376218 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jan 13 20:39:04.376230 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 13 20:39:04.376246 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 13 20:39:04.376259 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 13 20:39:04.376270 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 20:39:04.376282 kernel: Spectre V2 : Mitigation: Retpolines Jan 13 20:39:04.376295 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 20:39:04.376311 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 13 20:39:04.376323 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 13 20:39:04.376340 kernel: RETBleed: Mitigation: untrained return thunk Jan 13 20:39:04.376353 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 13 20:39:04.376365 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 13 20:39:04.376377 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 13 20:39:04.376390 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 13 20:39:04.376403 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 13 20:39:04.376420 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 13 20:39:04.376431 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 20:39:04.376442 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 20:39:04.376453 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 20:39:04.383584 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 13 20:39:04.383598 kernel: Freeing SMP alternatives memory: 32K Jan 13 20:39:04.383608 kernel: pid_max: default: 32768 minimum: 301 Jan 13 20:39:04.383619 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 20:39:04.383629 kernel: landlock: Up and running. Jan 13 20:39:04.383650 kernel: SELinux: Initializing. Jan 13 20:39:04.383660 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:39:04.383674 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:39:04.383684 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 13 20:39:04.383695 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 20:39:04.383705 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 20:39:04.383715 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 20:39:04.383725 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 13 20:39:04.383743 kernel: ... version: 0 Jan 13 20:39:04.383758 kernel: ... bit width: 48 Jan 13 20:39:04.383770 kernel: ... generic registers: 6 Jan 13 20:39:04.383783 kernel: ... value mask: 0000ffffffffffff Jan 13 20:39:04.383796 kernel: ... max period: 00007fffffffffff Jan 13 20:39:04.383808 kernel: ... fixed-purpose events: 0 Jan 13 20:39:04.383820 kernel: ... 
event mask: 000000000000003f Jan 13 20:39:04.383833 kernel: signal: max sigframe size: 1776 Jan 13 20:39:04.383860 kernel: rcu: Hierarchical SRCU implementation. Jan 13 20:39:04.383874 kernel: rcu: Max phase no-delay instances is 400. Jan 13 20:39:04.383891 kernel: smp: Bringing up secondary CPUs ... Jan 13 20:39:04.383903 kernel: smpboot: x86: Booting SMP configuration: Jan 13 20:39:04.383914 kernel: .... node #0, CPUs: #1 #2 #3 Jan 13 20:39:04.383924 kernel: smp: Brought up 1 node, 4 CPUs Jan 13 20:39:04.383934 kernel: smpboot: Max logical packages: 1 Jan 13 20:39:04.383944 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jan 13 20:39:04.383954 kernel: devtmpfs: initialized Jan 13 20:39:04.383963 kernel: x86/mm: Memory block size: 128MB Jan 13 20:39:04.383973 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 20:39:04.383987 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 13 20:39:04.383997 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 20:39:04.384007 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 20:39:04.384017 kernel: audit: initializing netlink subsys (disabled) Jan 13 20:39:04.384027 kernel: audit: type=2000 audit(1736800742.593:1): state=initialized audit_enabled=0 res=1 Jan 13 20:39:04.384038 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 20:39:04.384048 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 20:39:04.384058 kernel: cpuidle: using governor menu Jan 13 20:39:04.384068 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 20:39:04.384081 kernel: dca service started, version 1.12.1 Jan 13 20:39:04.384091 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 13 20:39:04.384101 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 13 20:39:04.384111 kernel: PCI: Using configuration type 1 for base access Jan 13 20:39:04.384121 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 13 20:39:04.384131 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 20:39:04.384141 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 20:39:04.384151 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 20:39:04.384161 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 20:39:04.384174 kernel: ACPI: Added _OSI(Module Device) Jan 13 20:39:04.384185 kernel: ACPI: Added _OSI(Processor Device) Jan 13 20:39:04.384195 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 20:39:04.384205 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 20:39:04.384215 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 20:39:04.384225 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 20:39:04.384234 kernel: ACPI: Interpreter enabled Jan 13 20:39:04.384244 kernel: ACPI: PM: (supports S0 S3 S5) Jan 13 20:39:04.384254 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 20:39:04.384268 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 20:39:04.384278 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 20:39:04.384288 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 13 20:39:04.384298 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 20:39:04.384646 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 20:39:04.384812 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 13 20:39:04.385017 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 13 20:39:04.385033 kernel: PCI host bridge to bus 0000:00 Jan 13 20:39:04.385228 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 20:39:04.385387 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 20:39:04.385554 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 20:39:04.385706 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 13 20:39:04.385883 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 13 20:39:04.386021 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 13 20:39:04.386523 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 20:39:04.386766 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 13 20:39:04.387050 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 13 20:39:04.387241 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 13 20:39:04.387415 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 13 20:39:04.387617 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 13 20:39:04.387818 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 20:39:04.388123 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 20:39:04.388329 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 13 20:39:04.388581 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 13 20:39:04.388752 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 13 20:39:04.388987 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 13 20:39:04.389141 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 13 20:39:04.389283 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 13 
20:39:04.389902 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 13 20:39:04.390101 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 13 20:39:04.393603 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 13 20:39:04.393812 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 13 20:39:04.394062 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 13 20:39:04.394228 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 13 20:39:04.394411 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 13 20:39:04.394593 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 13 20:39:04.394780 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 13 20:39:04.395051 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 13 20:39:04.395217 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 13 20:39:04.395490 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 13 20:39:04.395681 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 13 20:39:04.395700 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 13 20:39:04.395719 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 13 20:39:04.395731 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 20:39:04.395743 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 13 20:39:04.395756 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 13 20:39:04.395768 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 13 20:39:04.395781 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 13 20:39:04.395797 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 13 20:39:04.395812 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 13 20:39:04.395827 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 13 20:39:04.395963 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 13 20:39:04.395978 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 13 20:39:04.395991 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 13 20:39:04.396002 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 13 20:39:04.396014 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 13 20:39:04.396025 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 13 20:39:04.396036 kernel: iommu: Default domain type: Translated Jan 13 20:39:04.396047 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 20:39:04.396058 kernel: PCI: Using ACPI for IRQ routing Jan 13 20:39:04.396073 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 20:39:04.396083 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 13 20:39:04.396092 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 13 20:39:04.397497 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 13 20:39:04.397663 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 13 20:39:04.397814 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 20:39:04.397829 kernel: vgaarb: loaded Jan 13 20:39:04.397860 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 13 20:39:04.397882 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 13 20:39:04.397894 kernel: clocksource: Switched to clocksource kvm-clock Jan 13 20:39:04.397906 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 
20:39:04.397919 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:39:04.397930 kernel: pnp: PnP ACPI init Jan 13 20:39:04.398101 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 13 20:39:04.398116 kernel: pnp: PnP ACPI: found 6 devices Jan 13 20:39:04.398126 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 20:39:04.398140 kernel: NET: Registered PF_INET protocol family Jan 13 20:39:04.398150 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 20:39:04.398160 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 20:39:04.398169 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:39:04.398179 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 20:39:04.398189 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 20:39:04.398199 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 20:39:04.398209 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:39:04.398219 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:39:04.398231 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:39:04.398241 kernel: NET: Registered PF_XDP protocol family Jan 13 20:39:04.398379 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 20:39:04.398521 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 20:39:04.398649 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 20:39:04.398778 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 13 20:39:04.398960 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 13 20:39:04.399208 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 13 20:39:04.399225 kernel: PCI: CLS 0 bytes, default 64 Jan 13 20:39:04.399235 kernel: Initialise system trusted keyrings Jan 13 20:39:04.399245 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 20:39:04.399255 kernel: Key type asymmetric registered Jan 13 20:39:04.399264 kernel: Asymmetric key parser 'x509' registered Jan 13 20:39:04.399274 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 20:39:04.399284 kernel: io scheduler mq-deadline registered Jan 13 20:39:04.399294 kernel: io scheduler kyber registered Jan 13 20:39:04.399304 kernel: io scheduler bfq registered Jan 13 20:39:04.399316 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 20:39:04.399327 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 13 20:39:04.399337 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 13 20:39:04.399347 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 13 20:39:04.399356 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:39:04.399366 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 20:39:04.399376 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 20:39:04.399386 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 20:39:04.399396 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 20:39:04.402872 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 13 20:39:04.403044 kernel: rtc_cmos 00:04: registered as rtc0 Jan 13 20:39:04.403181 kernel: rtc_cmos 00:04: setting system clock to 
2025-01-13T20:39:03 UTC (1736800743) Jan 13 20:39:04.403317 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 13 20:39:04.403331 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 13 20:39:04.403342 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 20:39:04.403353 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:39:04.403362 kernel: Segment Routing with IPv6 Jan 13 20:39:04.403376 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 20:39:04.403386 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:39:04.403396 kernel: Key type dns_resolver registered Jan 13 20:39:04.403406 kernel: IPI shorthand broadcast: enabled Jan 13 20:39:04.403416 kernel: sched_clock: Marking stable (1200004373, 141357068)->(1419381577, -78020136) Jan 13 20:39:04.403426 kernel: registered taskstats version 1 Jan 13 20:39:04.403435 kernel: Loading compiled-in X.509 certificates Jan 13 20:39:04.403445 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344' Jan 13 20:39:04.403455 kernel: Key type .fscrypt registered Jan 13 20:39:04.403468 kernel: Key type fscrypt-provisioning registered Jan 13 20:39:04.403490 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 20:39:04.403500 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:39:04.403510 kernel: ima: No architecture policies found Jan 13 20:39:04.403520 kernel: clk: Disabling unused clocks Jan 13 20:39:04.403529 kernel: Freeing unused kernel image (initmem) memory: 42976K Jan 13 20:39:04.403539 kernel: Write protecting the kernel read-only data: 36864k Jan 13 20:39:04.403549 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 13 20:39:04.403559 kernel: Run /init as init process Jan 13 20:39:04.403572 kernel: with arguments: Jan 13 20:39:04.403582 kernel: /init Jan 13 20:39:04.403592 kernel: with environment: Jan 13 20:39:04.403601 kernel: HOME=/ Jan 13 20:39:04.403610 kernel: TERM=linux Jan 13 20:39:04.403620 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:39:04.403632 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:39:04.403644 systemd[1]: Detected virtualization kvm. Jan 13 20:39:04.403658 systemd[1]: Detected architecture x86-64. Jan 13 20:39:04.403668 systemd[1]: Running in initrd. Jan 13 20:39:04.403678 systemd[1]: No hostname configured, using default hostname. Jan 13 20:39:04.403688 systemd[1]: Hostname set to <localhost>. Jan 13 20:39:04.403699 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:39:04.403709 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:39:04.403720 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:39:04.403730 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:39:04.403746 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:39:04.403771 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 13 20:39:04.403785 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:39:04.403796 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:39:04.403812 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:39:04.403829 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:39:04.403859 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:39:04.403873 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:39:04.403886 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:39:04.403899 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:39:04.403913 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:39:04.403926 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:39:04.403939 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:39:04.403952 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:39:04.403963 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:39:04.403974 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:39:04.403984 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:39:04.403995 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:39:04.404005 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:39:04.404016 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:39:04.404026 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:39:04.404037 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:39:04.404051 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:39:04.404061 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:39:04.404072 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:39:04.404083 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:39:04.404093 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:39:04.404104 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:39:04.404114 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:39:04.404128 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:39:04.404172 systemd-journald[195]: Collecting audit messages is disabled. Jan 13 20:39:04.404202 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:39:04.404216 systemd-journald[195]: Journal started Jan 13 20:39:04.404242 systemd-journald[195]: Runtime Journal (/run/log/journal/0fe48745927149cb9767cbe9e7d484d7) is 6.0M, max 48.4M, 42.3M free. Jan 13 20:39:04.407628 systemd-modules-load[196]: Inserted module 'overlay' Jan 13 20:39:04.469877 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:39:04.477641 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 13 20:39:04.479931 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:39:04.515254 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:39:04.515261 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:39:04.516460 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:39:04.525717 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:39:04.543701 kernel: Bridge firewalling registered Jan 13 20:39:04.541881 systemd-modules-load[196]: Inserted module 'br_netfilter' Jan 13 20:39:04.545866 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:39:04.564220 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:39:04.566213 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:39:04.572524 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:39:04.577413 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:39:04.585812 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:39:04.605231 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:39:04.609017 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:39:04.629979 dracut-cmdline[230]: dracut-dracut-053 Jan 13 20:39:04.640285 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:39:04.690231 systemd-resolved[231]: Positive Trust Anchors: Jan 13 20:39:04.690257 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:39:04.690310 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:39:04.708943 systemd-resolved[231]: Defaulting to hostname 'linux'. Jan 13 20:39:04.712729 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:39:04.712996 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:39:04.827339 kernel: SCSI subsystem initialized Jan 13 20:39:04.842915 kernel: Loading iSCSI transport class v2.0-870. 
Jan 13 20:39:04.871637 kernel: iscsi: registered transport (tcp) Jan 13 20:39:04.912929 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:39:04.913033 kernel: QLogic iSCSI HBA Driver Jan 13 20:39:05.042336 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:39:05.057235 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:39:05.104963 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 20:39:05.105076 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:39:05.109907 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:39:05.200525 kernel: raid6: avx2x4 gen() 11273 MB/s Jan 13 20:39:05.216921 kernel: raid6: avx2x2 gen() 12619 MB/s Jan 13 20:39:05.236914 kernel: raid6: avx2x1 gen() 12667 MB/s Jan 13 20:39:05.237023 kernel: raid6: using algorithm avx2x1 gen() 12667 MB/s Jan 13 20:39:05.255762 kernel: raid6: .... xor() 7311 MB/s, rmw enabled Jan 13 20:39:05.255933 kernel: raid6: using avx2x2 recovery algorithm Jan 13 20:39:05.325949 kernel: xor: automatically using best checksumming function avx Jan 13 20:39:05.712584 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:39:05.772057 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:39:05.794112 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:39:05.826814 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jan 13 20:39:05.835635 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:39:05.865132 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:39:05.929705 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Jan 13 20:39:05.993651 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:39:06.017321 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:39:06.137025 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:39:06.182575 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:39:06.216956 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:39:06.228703 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:39:06.233610 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:39:06.235277 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:39:06.264955 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:39:06.278639 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:39:06.280344 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:39:06.287576 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:39:06.290952 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:39:06.291195 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:39:06.298454 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:39:06.304615 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 20:39:06.308531 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:39:06.313885 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 13 20:39:06.328724 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 20:39:06.329163 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 20:39:06.329185 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:39:06.329203 kernel: GPT:9289727 != 19775487 Jan 13 20:39:06.329236 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:39:06.329255 kernel: GPT:9289727 != 19775487 Jan 13 20:39:06.329271 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:39:06.329288 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:39:06.349893 kernel: libata version 3.00 loaded. Jan 13 20:39:06.350898 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 20:39:06.355050 kernel: AES CTR mode by8 optimization enabled Jan 13 20:39:06.377441 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 20:39:06.442776 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 20:39:06.442806 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 20:39:06.443018 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 20:39:06.443203 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (469) Jan 13 20:39:06.443219 kernel: scsi host0: ahci Jan 13 20:39:06.443446 kernel: scsi host1: ahci Jan 13 20:39:06.443677 kernel: scsi host2: ahci Jan 13 20:39:06.443922 kernel: scsi host3: ahci Jan 13 20:39:06.444132 kernel: scsi host4: ahci Jan 13 20:39:06.444338 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (478) Jan 13 20:39:06.444362 kernel: scsi host5: ahci Jan 13 20:39:06.444613 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 13 20:39:06.444634 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 13 20:39:06.444649 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 13 20:39:06.444664 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 13 20:39:06.444684 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 13 20:39:06.444699 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 13 20:39:06.425355 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 20:39:06.451115 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 20:39:06.453869 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:39:06.469882 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:39:06.477461 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 20:39:06.477653 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 20:39:06.518253 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:39:06.526303 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:39:06.588819 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 20:39:06.601239 disk-uuid[557]: Primary Header is updated. Jan 13 20:39:06.601239 disk-uuid[557]: Secondary Entries is updated. Jan 13 20:39:06.601239 disk-uuid[557]: Secondary Header is updated. Jan 13 20:39:06.612902 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:39:06.628896 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:39:06.751865 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 20:39:06.751941 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 13 20:39:06.757878 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 20:39:06.757947 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 13 20:39:06.757962 kernel: ata3.00: applying bridge limits Jan 13 20:39:06.759548 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 20:39:06.767481 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 20:39:06.767547 kernel: ata3.00: configured for UDMA/100 Jan 13 20:39:06.770894 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 20:39:06.783347 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 20:39:06.895021 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 13 20:39:06.908283 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 20:39:06.908312 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 13 20:39:07.634571 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:39:07.645476 disk-uuid[566]: The operation has completed successfully. Jan 13 20:39:07.718444 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:39:07.718627 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:39:07.770252 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:39:07.781799 sh[594]: Success Jan 13 20:39:07.832885 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 13 20:39:07.935528 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:39:07.970419 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:39:07.972187 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 20:39:08.025877 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 13 20:39:08.025958 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:39:08.025979 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:39:08.027495 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:39:08.028613 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:39:08.072133 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:39:08.074048 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:39:08.094270 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:39:08.099807 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 13 20:39:08.124665 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:39:08.124742 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:39:08.124762 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:39:08.140151 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:39:08.158967 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:39:08.163381 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:39:08.227592 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:39:08.249469 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:39:08.500411 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:39:08.519201 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:39:08.597436 systemd-networkd[775]: lo: Link UP Jan 13 20:39:08.597454 systemd-networkd[775]: lo: Gained carrier Jan 13 20:39:08.600032 systemd-networkd[775]: Enumeration completed Jan 13 20:39:08.600623 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:39:08.600628 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:39:08.616809 ignition[698]: Ignition 2.20.0 Jan 13 20:39:08.604957 systemd-networkd[775]: eth0: Link UP Jan 13 20:39:08.616819 ignition[698]: Stage: fetch-offline Jan 13 20:39:08.604964 systemd-networkd[775]: eth0: Gained carrier Jan 13 20:39:08.616901 ignition[698]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:39:08.604981 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:39:08.616916 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:39:08.605589 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:39:08.617045 ignition[698]: parsed url from cmdline: "" Jan 13 20:39:08.614311 systemd[1]: Reached target network.target - Network. Jan 13 20:39:08.617051 ignition[698]: no config URL provided Jan 13 20:39:08.617058 ignition[698]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:39:08.617075 ignition[698]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:39:08.617122 ignition[698]: op(1): [started] loading QEMU firmware config module Jan 13 20:39:08.617129 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 20:39:08.650253 ignition[698]: op(1): [finished] loading QEMU firmware config module Jan 13 20:39:08.650292 ignition[698]: QEMU firmware config was not found. Ignoring... 
Jan 13 20:39:08.683024 systemd-networkd[775]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:39:08.730546 ignition[698]: parsing config with SHA512: c11e9866ff97e194e32c5f74007bd7f77cf1dc96689f2588c7361cf45a433d7a15cf389608b86d90f748c65bf00d36e748de2ac02a571b713b0d5c23657629c5 Jan 13 20:39:08.754450 unknown[698]: fetched base config from "system" Jan 13 20:39:08.757287 unknown[698]: fetched user config from "qemu" Jan 13 20:39:08.758457 ignition[698]: fetch-offline: fetch-offline passed Jan 13 20:39:08.758615 ignition[698]: Ignition finished successfully Jan 13 20:39:08.788435 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:39:08.791444 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 20:39:08.804154 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 20:39:08.843285 ignition[786]: Ignition 2.20.0 Jan 13 20:39:08.843302 ignition[786]: Stage: kargs Jan 13 20:39:08.843548 ignition[786]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:39:08.843562 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:39:08.846115 ignition[786]: kargs: kargs passed Jan 13 20:39:08.865713 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:39:08.846202 ignition[786]: Ignition finished successfully Jan 13 20:39:08.890985 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 20:39:08.950798 ignition[795]: Ignition 2.20.0 Jan 13 20:39:08.950819 ignition[795]: Stage: disks Jan 13 20:39:08.951099 ignition[795]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:39:08.951116 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:39:08.955902 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:39:08.952436 ignition[795]: disks: disks passed Jan 13 20:39:08.965155 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:39:08.952502 ignition[795]: Ignition finished successfully Jan 13 20:39:08.970372 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:39:08.977026 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:39:08.978579 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:39:08.984409 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:39:09.008343 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:39:09.057629 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 20:39:09.181268 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:39:09.205808 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:39:09.486991 kernel: EXT4-fs (vda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 13 20:39:09.488645 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:39:09.496101 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:39:09.514508 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:39:09.532014 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jan 13 20:39:09.555599 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (815) Jan 13 20:39:09.555641 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:39:09.555660 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:39:09.555676 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:39:09.540050 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 20:39:09.540132 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:39:09.540175 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:39:09.562950 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:39:09.571864 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:39:09.577113 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:39:09.599806 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:39:09.727481 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:39:09.738411 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:39:09.747007 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:39:09.761508 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:39:10.086443 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:39:10.118033 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:39:10.144628 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:39:10.160265 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:39:10.159916 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:39:10.265042 ignition[929]: INFO : Ignition 2.20.0 Jan 13 20:39:10.270188 ignition[929]: INFO : Stage: mount Jan 13 20:39:10.270188 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:39:10.270188 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:39:10.284931 ignition[929]: INFO : mount: mount passed Jan 13 20:39:10.284931 ignition[929]: INFO : Ignition finished successfully Jan 13 20:39:10.276707 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:39:10.282987 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:39:10.288216 systemd-networkd[775]: eth0: Gained IPv6LL Jan 13 20:39:10.337400 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:39:10.512984 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:39:10.545517 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942) Jan 13 20:39:10.564387 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:39:10.564472 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:39:10.564490 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:39:10.601329 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:39:10.603607 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:39:10.669177 ignition[959]: INFO : Ignition 2.20.0 Jan 13 20:39:10.669177 ignition[959]: INFO : Stage: files Jan 13 20:39:10.672219 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:39:10.672219 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:39:10.677879 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:39:10.683264 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:39:10.683264 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:39:10.714908 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:39:10.719480 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:39:10.722406 unknown[959]: wrote ssh authorized keys file for user: core Jan 13 20:39:10.724125 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:39:10.732445 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:39:10.732445 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 20:39:10.784515 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 20:39:10.987411 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:39:10.987411 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:39:10.994115 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 13 20:39:11.528129 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 20:39:11.951694 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:39:11.951694 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:39:11.961969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:39:11.961969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:39:11.961969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:39:11.961969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:39:11.961969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:39:11.961969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:39:11.961969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:39:11.961969 ignition[959]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:39:11.961969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:39:11.961969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:39:11.961969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:39:11.961969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:39:11.961969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 13 20:39:12.338744 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 20:39:13.754351 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:39:13.754351 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 13 20:39:13.766125 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:39:13.768957 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:39:13.768957 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 13 20:39:13.768957 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 13 20:39:13.768957 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:39:13.768957 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:39:13.768957 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 13 20:39:13.768957 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 20:39:13.932838 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:39:14.001115 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:39:14.003254 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 20:39:14.003254 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:39:14.003254 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:39:14.003254 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:39:14.003254 ignition[959]: INFO : files: createResultFile: createFiles: op(13): 
[finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:39:14.003254 ignition[959]: INFO : files: files passed Jan 13 20:39:14.003254 ignition[959]: INFO : Ignition finished successfully Jan 13 20:39:14.017148 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:39:14.038356 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:39:14.045026 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:39:14.050316 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:39:14.050552 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:39:14.087516 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 20:39:14.093337 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:39:14.093337 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:39:14.097467 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:39:14.102498 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:39:14.104806 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:39:14.117313 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:39:14.182916 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:39:14.183118 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:39:14.186579 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:39:14.191694 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:39:14.194655 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:39:14.196297 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:39:14.233700 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:39:14.265744 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:39:14.286395 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:39:14.288564 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:39:14.298710 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:39:14.301979 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:39:14.302223 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:39:14.306197 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:39:14.308789 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:39:14.314410 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:39:14.321052 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:39:14.324098 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:39:14.332432 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:39:14.335584 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
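The files stage recorded above fetches artifacts over HTTPS (helm, the cilium CLI, the kubernetes sysext image) and writes them to paths under /sysroot. Below is a hedged sketch of that download-and-stage step, using the helm URL and destination exactly as they appear in the op(3) entries; it is an illustration, not Ignition's implementation.

```python
# Sketch (not Ignition's code): fetch an artifact over HTTPS and stage it under
# the new root, roughly what the "files: createFilesystemsFiles" operations
# above report for helm, cilium and the kubernetes image.
import os
import urllib.request

SYSROOT = "/sysroot"  # target root mounted earlier in the log

def stage_file(url: str, dest: str) -> None:
    """Download url and write it to dest inside SYSROOT."""
    target = os.path.join(SYSROOT, dest.lstrip("/"))
    os.makedirs(os.path.dirname(target), exist_ok=True)
    with urllib.request.urlopen(url) as resp, open(target, "wb") as out:
        out.write(resp.read())

if __name__ == "__main__":
    # URL and destination copied from the op(3) entries above.
    stage_file("https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz",
               "/opt/helm-v3.13.2-linux-amd64.tar.gz")
```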
Jan 13 20:39:14.342909 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:39:14.348611 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:39:14.351549 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:39:14.354530 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:39:14.354792 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:39:14.362401 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:39:14.365256 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:39:14.368339 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:39:14.369868 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:39:14.371789 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:39:14.371998 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:39:14.378550 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:39:14.378776 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:39:14.383803 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:39:14.388439 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:39:14.398489 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:39:14.403716 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:39:14.407764 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:39:14.411829 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:39:14.413148 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:39:14.416773 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:39:14.416973 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:39:14.418361 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:39:14.418524 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:39:14.426307 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:39:14.426521 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:39:14.453355 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:39:14.464528 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:39:14.466403 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:39:14.466639 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:39:14.468479 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:39:14.468658 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:39:14.506099 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:39:14.506296 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:39:14.553620 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 13 20:39:14.556252 ignition[1013]: INFO : Ignition 2.20.0 Jan 13 20:39:14.556252 ignition[1013]: INFO : Stage: umount Jan 13 20:39:14.558950 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:39:14.558950 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:39:14.558950 ignition[1013]: INFO : umount: umount passed Jan 13 20:39:14.558950 ignition[1013]: INFO : Ignition finished successfully Jan 13 20:39:14.564497 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:39:14.564684 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:39:14.569031 systemd[1]: Stopped target network.target - Network. Jan 13 20:39:14.572997 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:39:14.573135 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:39:14.575761 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:39:14.575861 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:39:14.580774 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:39:14.580923 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:39:14.587005 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:39:14.587103 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:39:14.590078 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:39:14.592764 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:39:14.597115 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:39:14.597327 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:39:14.600404 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:39:14.600540 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:39:14.602586 systemd-networkd[775]: eth0: DHCPv6 lease lost Jan 13 20:39:14.610277 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:39:14.610558 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:39:14.615457 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:39:14.615642 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:39:14.621728 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:39:14.621822 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:39:14.637161 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:39:14.638727 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:39:14.638859 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:39:14.641598 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:39:14.641719 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:39:14.644291 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:39:14.644377 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:39:14.654560 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:39:14.654680 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:39:14.658159 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 13 20:39:14.675541 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:39:14.675827 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:39:14.681706 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:39:14.681902 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:39:14.688124 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:39:14.688257 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:39:14.691059 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:39:14.691132 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:39:14.693798 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:39:14.693948 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:39:14.696432 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:39:14.696532 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:39:14.701548 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:39:14.701671 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:39:14.715491 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:39:14.717425 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:39:14.717560 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:39:14.724695 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 20:39:14.724827 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:39:14.727716 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:39:14.727829 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:39:14.731776 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:39:14.732281 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:39:14.737520 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:39:14.737710 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:39:14.743104 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:39:14.764464 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:39:14.776892 systemd[1]: Switching root. Jan 13 20:39:14.823619 systemd-journald[195]: Journal stopped Jan 13 20:39:17.150251 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). 
Jan 13 20:39:17.150337 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:39:17.150364 kernel: SELinux: policy capability open_perms=1 Jan 13 20:39:17.150382 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:39:17.150398 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:39:17.150415 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:39:17.150438 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:39:17.150464 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:39:17.150492 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:39:17.150521 kernel: audit: type=1403 audit(1736800755.507:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:39:17.150538 systemd[1]: Successfully loaded SELinux policy in 76.795ms. Jan 13 20:39:17.150572 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.255ms. Jan 13 20:39:17.150592 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:39:17.150610 systemd[1]: Detected virtualization kvm. Jan 13 20:39:17.150628 systemd[1]: Detected architecture x86-64. Jan 13 20:39:17.150650 systemd[1]: Detected first boot. Jan 13 20:39:17.150670 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:39:17.150687 zram_generator::config[1057]: No configuration found. Jan 13 20:39:17.150717 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:39:17.150752 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:39:17.150771 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:39:17.150795 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:39:17.150822 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:39:17.154611 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:39:17.154676 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:39:17.154712 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:39:17.154751 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:39:17.154785 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:39:17.154833 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:39:17.154875 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:39:17.154896 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:39:17.154920 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:39:17.154949 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:39:17.154971 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:39:17.154999 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
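"Initializing machine ID from VM UUID" above refers to deriving the machine ID from the UUID the hypervisor exposes to the guest. Assuming the standard DMI sysfs path on a KVM guest (an assumption for illustration, not something stated in the log), reading and normalising that UUID might look like this.

```python
# Rough sketch of the idea behind "Initializing machine ID from VM UUID":
# read the DMI product UUID exposed to the guest and normalise it to the
# 32-hex-character machine-id form. Path and mechanism are assumptions.
def vm_uuid_as_machine_id(path: str = "/sys/class/dmi/id/product_uuid") -> str:
    with open(path) as f:
        uuid = f.read().strip()
    return uuid.replace("-", "").lower()

if __name__ == "__main__":
    print(vm_uuid_as_machine_id())
```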
Jan 13 20:39:17.155020 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:39:17.155040 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:39:17.155060 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:39:17.155080 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:39:17.155100 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:39:17.155120 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:39:17.155156 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:39:17.155177 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:39:17.155197 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:39:17.155218 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:39:17.155238 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:39:17.155258 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:39:17.155277 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:39:17.155297 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:39:17.155325 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:39:17.155345 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:39:17.155366 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:39:17.155386 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:39:17.155407 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:39:17.155427 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:39:17.155447 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:39:17.155468 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:39:17.155492 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:39:17.155513 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:39:17.155532 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:39:17.155552 systemd[1]: Reached target machines.target - Containers. Jan 13 20:39:17.155572 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:39:17.155592 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:39:17.155611 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:39:17.155632 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:39:17.155652 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:39:17.155677 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:39:17.155724 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:39:17.155745 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 13 20:39:17.155766 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:39:17.155789 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:39:17.155809 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:39:17.155830 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:39:17.155877 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:39:17.155910 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:39:17.155953 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:39:17.155980 kernel: loop: module loaded Jan 13 20:39:17.156003 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:39:17.156023 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:39:17.156044 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:39:17.156064 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:39:17.156085 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:39:17.156105 systemd[1]: Stopped verity-setup.service. Jan 13 20:39:17.156132 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:39:17.159511 kernel: fuse: init (API version 7.39) Jan 13 20:39:17.159549 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:39:17.159597 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:39:17.159654 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:39:17.159714 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:39:17.159744 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:39:17.159801 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:39:17.159988 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:39:17.160079 systemd-journald[1120]: Collecting audit messages is disabled. Jan 13 20:39:17.160162 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:39:17.160187 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:39:17.160225 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:39:17.160256 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:39:17.160292 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:39:17.160330 systemd-journald[1120]: Journal started Jan 13 20:39:17.161408 systemd-journald[1120]: Runtime Journal (/run/log/journal/0fe48745927149cb9767cbe9e7d484d7) is 6.0M, max 48.4M, 42.3M free. Jan 13 20:39:16.645186 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:39:16.677350 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 20:39:16.678375 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:39:17.165456 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:39:17.169045 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 13 20:39:17.169122 kernel: ACPI: bus type drm_connector registered Jan 13 20:39:17.173074 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:39:17.173673 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:39:17.176130 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:39:17.176634 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:39:17.178832 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:39:17.179331 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:39:17.181643 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:39:17.184174 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:39:17.190780 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:39:17.220612 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:39:17.232166 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:39:17.247980 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:39:17.250087 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:39:17.250161 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:39:17.253364 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:39:17.273510 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:39:17.278098 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:39:17.282756 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:39:17.290673 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:39:17.295028 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:39:17.296796 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:39:17.302227 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:39:17.304639 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:39:17.306762 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:39:17.312952 systemd-journald[1120]: Time spent on flushing to /var/log/journal/0fe48745927149cb9767cbe9e7d484d7 is 38.427ms for 954 entries. Jan 13 20:39:17.312952 systemd-journald[1120]: System Journal (/var/log/journal/0fe48745927149cb9767cbe9e7d484d7) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:39:17.408281 systemd-journald[1120]: Received client request to flush runtime journal. Jan 13 20:39:17.408381 kernel: loop0: detected capacity change from 0 to 210664 Jan 13 20:39:17.311151 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:39:17.315144 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:39:17.319486 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jan 13 20:39:17.322217 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:39:17.331827 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:39:17.339948 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:39:17.347216 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:39:17.358061 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:39:17.370423 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:39:17.393194 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:39:17.413292 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:39:17.419819 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:39:17.422504 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:39:17.442923 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:39:17.450696 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 20:39:17.463454 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Jan 13 20:39:17.463477 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Jan 13 20:39:17.476057 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:39:17.495237 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:39:17.498117 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:39:17.499247 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:39:17.531895 kernel: loop1: detected capacity change from 0 to 140992 Jan 13 20:39:17.578938 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:39:17.599194 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:39:17.628910 kernel: loop2: detected capacity change from 0 to 138184 Jan 13 20:39:17.643773 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 13 20:39:17.643805 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 13 20:39:17.658254 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:39:17.686269 kernel: loop3: detected capacity change from 0 to 210664 Jan 13 20:39:17.702896 kernel: loop4: detected capacity change from 0 to 140992 Jan 13 20:39:17.749231 kernel: loop5: detected capacity change from 0 to 138184 Jan 13 20:39:17.794503 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 20:39:17.795608 (sd-merge)[1199]: Merged extensions into '/usr'. Jan 13 20:39:17.814598 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:39:17.816187 systemd[1]: Reloading... Jan 13 20:39:17.983895 zram_generator::config[1228]: No configuration found. Jan 13 20:39:18.279185 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:39:18.379815 systemd[1]: Reloading finished in 560 ms. 
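The (sd-merge) lines above show systemd-sysext discovering the containerd-flatcar, docker-flatcar and kubernetes extension images and merging them into /usr. As a rough sketch of the discovery step only (directory list per the systemd-sysext documentation; the actual overlay mount is not shown), enumerating candidate images could be done like this.

```python
# Illustrative sketch: list the extension images systemd-sysext would consider
# before printing a "Using extensions ... / Merged extensions into '/usr'"
# message like the one above. This does not perform the merge itself.
from pathlib import Path

SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_extension_images():
    images = []
    for base in SEARCH_PATHS:
        p = Path(base)
        if p.is_dir():
            images.extend(sorted(p.glob("*.raw")))
    return images

if __name__ == "__main__":
    for img in list_extension_images():
        print(img)
```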
Jan 13 20:39:18.529194 ldconfig[1165]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:39:18.580823 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:39:18.585595 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:39:18.618359 systemd[1]: Starting ensure-sysext.service... Jan 13 20:39:18.629493 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:39:18.634810 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:39:18.634827 systemd[1]: Reloading... Jan 13 20:39:18.780888 zram_generator::config[1290]: No configuration found. Jan 13 20:39:18.781667 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:39:18.782341 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:39:18.783699 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:39:18.784224 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Jan 13 20:39:18.784321 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Jan 13 20:39:18.790368 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:39:18.790561 systemd-tmpfiles[1264]: Skipping /boot Jan 13 20:39:18.808654 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:39:18.808810 systemd-tmpfiles[1264]: Skipping /boot Jan 13 20:39:18.983779 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:39:19.065000 systemd[1]: Reloading finished in 429 ms. Jan 13 20:39:19.096127 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:39:19.119823 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:39:19.149754 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:39:19.163397 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:39:19.171531 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:39:19.187454 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:39:19.196752 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:39:19.208204 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:39:19.220418 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:39:19.220676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:39:19.228115 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:39:19.256663 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:39:19.263660 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 13 20:39:19.265286 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:39:19.268181 systemd-udevd[1340]: Using default interface naming scheme 'v255'. Jan 13 20:39:19.270239 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:39:19.273367 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:39:19.275164 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:39:19.278137 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:39:19.284296 augenrules[1358]: No rules Jan 13 20:39:19.278377 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:39:19.281487 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:39:19.282653 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:39:19.293220 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:39:19.293539 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:39:19.296107 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:39:19.296358 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:39:19.315624 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:39:19.315944 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:39:19.328563 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:39:19.339817 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:39:19.349726 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:39:19.351639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:39:19.357443 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:39:19.359384 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:39:19.361358 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:39:19.370251 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:39:19.372832 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:39:19.373133 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:39:19.380483 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:39:19.385107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:39:19.386508 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:39:19.389679 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:39:19.389960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:39:19.407000 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:39:19.414863 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 13 20:39:19.438291 systemd[1]: Finished ensure-sysext.service. Jan 13 20:39:19.454636 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:39:19.690740 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:39:19.692333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:39:19.699331 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:39:19.706982 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:39:19.722967 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1390) Jan 13 20:39:19.720421 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:39:19.727239 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:39:19.728784 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:39:19.732358 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:39:19.737153 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:39:19.739922 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:39:19.739973 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:39:19.740578 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 20:39:19.750962 systemd-resolved[1338]: Positive Trust Anchors: Jan 13 20:39:19.750984 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:39:19.751023 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:39:19.756544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:39:19.756968 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:39:19.759515 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:39:19.759794 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:39:19.762595 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:39:19.762927 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:39:19.763896 systemd-resolved[1338]: Defaulting to hostname 'linux'. Jan 13 20:39:19.768587 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:39:19.771122 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 13 20:39:19.772792 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:39:19.791701 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:39:19.792246 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:39:19.796042 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:39:19.798244 augenrules[1403]: /sbin/augenrules: No change Jan 13 20:39:19.812256 augenrules[1435]: No rules Jan 13 20:39:19.816328 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:39:19.816682 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:39:19.904886 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 13 20:39:19.921162 kernel: ACPI: button: Power Button [PWRF] Jan 13 20:39:19.921014 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:39:19.930156 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:39:19.942995 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 13 20:39:19.945169 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 13 20:39:19.953550 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 13 20:39:19.968898 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 13 20:39:20.019400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:39:20.019760 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:39:20.030454 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:39:20.035258 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:39:20.036455 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:39:20.043918 systemd-networkd[1414]: lo: Link UP Jan 13 20:39:20.043935 systemd-networkd[1414]: lo: Gained carrier Jan 13 20:39:20.046444 systemd-networkd[1414]: Enumeration completed Jan 13 20:39:20.047130 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:39:20.047145 systemd-networkd[1414]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:39:20.047612 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:39:20.050261 systemd-networkd[1414]: eth0: Link UP Jan 13 20:39:20.050280 systemd-networkd[1414]: eth0: Gained carrier Jan 13 20:39:20.050304 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:39:20.054050 systemd[1]: Reached target network.target - Network. Jan 13 20:39:20.073446 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:39:20.077968 systemd-networkd[1414]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:39:20.081479 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection. Jan 13 20:39:20.087264 systemd-timesyncd[1416]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
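systemd-timesyncd above reports contacting the time server at 10.0.0.1:123 and then performing the initial clock synchronization. A self-contained SNTP query against that same address (a simplified sketch, not timesyncd's implementation) looks roughly like this.

```python
# Simplified SNTP client: send a 48-byte mode-3 request and read the server's
# transmit timestamp. Server address and port are taken from the log above.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server: str = "10.0.0.1", port: int = 123, timeout: float = 2.0) -> float:
    packet = bytearray(48)
    packet[0] = (0 << 6) | (4 << 3) | 3  # LI=0, version 4, mode 3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(bytes(packet), (server, port))
        data, _ = s.recvfrom(48)
    secs, frac = struct.unpack("!II", data[40:48])  # transmit timestamp field
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

if __name__ == "__main__":
    print(time.ctime(sntp_time()))
```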
Jan 13 20:39:20.089109 systemd-timesyncd[1416]: Initial clock synchronization to Mon 2025-01-13 20:39:20.409167 UTC. Jan 13 20:39:20.217879 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:39:20.280536 kernel: kvm_amd: TSC scaling supported Jan 13 20:39:20.280677 kernel: kvm_amd: Nested Virtualization enabled Jan 13 20:39:20.280819 kernel: kvm_amd: Nested Paging enabled Jan 13 20:39:20.281157 kernel: kvm_amd: LBR virtualization supported Jan 13 20:39:20.282618 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 13 20:39:20.282674 kernel: kvm_amd: Virtual GIF supported Jan 13 20:39:20.366891 kernel: EDAC MC: Ver: 3.0.0 Jan 13 20:39:20.403874 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:39:20.423373 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:39:20.450117 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:39:20.489318 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:39:20.493633 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:39:20.496532 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:39:20.498661 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:39:20.500737 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:39:20.503226 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:39:20.505207 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:39:20.507382 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:39:20.509407 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:39:20.509463 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:39:20.511173 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:39:20.513794 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:39:20.519706 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:39:20.535792 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:39:20.544552 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:39:20.547489 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:39:20.549276 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:39:20.550755 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:39:20.553741 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:39:20.553783 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:39:20.555102 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:39:20.556150 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:39:20.572356 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:39:20.577616 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jan 13 20:39:20.581552 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:39:20.583966 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:39:20.589502 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:39:20.594061 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:39:20.596874 jq[1469]: false Jan 13 20:39:20.604127 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:39:20.611635 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:39:20.624201 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:39:20.630151 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:39:20.631390 dbus-daemon[1468]: [system] SELinux support is enabled Jan 13 20:39:20.634686 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:39:20.641974 extend-filesystems[1470]: Found loop3 Jan 13 20:39:20.641974 extend-filesystems[1470]: Found loop4 Jan 13 20:39:20.641974 extend-filesystems[1470]: Found loop5 Jan 13 20:39:20.641974 extend-filesystems[1470]: Found sr0 Jan 13 20:39:20.641974 extend-filesystems[1470]: Found vda Jan 13 20:39:20.641974 extend-filesystems[1470]: Found vda1 Jan 13 20:39:20.641974 extend-filesystems[1470]: Found vda2 Jan 13 20:39:20.641974 extend-filesystems[1470]: Found vda3 Jan 13 20:39:20.641974 extend-filesystems[1470]: Found usr Jan 13 20:39:20.641974 extend-filesystems[1470]: Found vda4 Jan 13 20:39:20.641974 extend-filesystems[1470]: Found vda6 Jan 13 20:39:20.641974 extend-filesystems[1470]: Found vda7 Jan 13 20:39:20.641974 extend-filesystems[1470]: Found vda9 Jan 13 20:39:20.641974 extend-filesystems[1470]: Checking size of /dev/vda9 Jan 13 20:39:20.647303 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:39:20.656328 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:39:20.674389 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:39:20.680936 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:39:20.685503 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:39:20.685873 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:39:20.689776 update_engine[1484]: I20250113 20:39:20.688951 1484 main.cc:92] Flatcar Update Engine starting Jan 13 20:39:20.686416 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:39:20.686723 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:39:20.693571 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:39:20.693948 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
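The extend-filesystems "Found ..." entries above enumerate the block devices visible at this point. How the real helper discovers them is not shown in the log; as a generic illustration only, listing block devices through sysfs yields a similar inventory.

```python
# Illustrative only: list block devices via sysfs, similar in spirit to the
# "Found loop3 ... Found vda9" inventory printed by extend-filesystems above.
from pathlib import Path

for name in sorted(p.name for p in Path("/sys/class/block").iterdir()):
    print("Found", name)
```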
Jan 13 20:39:20.701657 extend-filesystems[1470]: Resized partition /dev/vda9 Jan 13 20:39:20.705562 extend-filesystems[1495]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:39:20.714793 update_engine[1484]: I20250113 20:39:20.714560 1484 update_check_scheduler.cc:74] Next update check in 8m36s Jan 13 20:39:20.723721 jq[1488]: true Jan 13 20:39:20.727608 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:39:20.727681 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:39:20.728133 (ntainerd)[1494]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:39:20.736239 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 20:39:20.883066 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:39:20.883119 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:39:20.900923 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 20:39:20.901069 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1390) Jan 13 20:39:20.923538 tar[1492]: linux-amd64/helm Jan 13 20:39:20.963310 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:39:20.980833 jq[1503]: true Jan 13 20:39:20.990871 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:39:20.993363 extend-filesystems[1495]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 20:39:20.993363 extend-filesystems[1495]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:39:20.993363 extend-filesystems[1495]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 20:39:21.003734 extend-filesystems[1470]: Resized filesystem in /dev/vda9 Jan 13 20:39:20.994193 systemd-logind[1481]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 20:39:20.994222 systemd-logind[1481]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 20:39:20.998221 systemd-logind[1481]: New seat seat0. Jan 13 20:39:21.018181 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:39:21.020641 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:39:21.024115 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:39:21.024423 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:39:21.149072 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:39:21.203688 bash[1525]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:39:21.214869 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:39:21.219735 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 20:39:21.458765 containerd[1494]: time="2025-01-13T20:39:21.458525656Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:39:21.516875 containerd[1494]: time="2025-01-13T20:39:21.516466953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 13 20:39:21.531918 containerd[1494]: time="2025-01-13T20:39:21.531001983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:39:21.531918 containerd[1494]: time="2025-01-13T20:39:21.531066767Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:39:21.531918 containerd[1494]: time="2025-01-13T20:39:21.531092688Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:39:21.531918 containerd[1494]: time="2025-01-13T20:39:21.531395373Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:39:21.531918 containerd[1494]: time="2025-01-13T20:39:21.531417072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:39:21.531918 containerd[1494]: time="2025-01-13T20:39:21.531503577Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:39:21.531918 containerd[1494]: time="2025-01-13T20:39:21.531518418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:39:21.531918 containerd[1494]: time="2025-01-13T20:39:21.531808596Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:39:21.531918 containerd[1494]: time="2025-01-13T20:39:21.531829336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:39:21.531918 containerd[1494]: time="2025-01-13T20:39:21.531847346Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:39:21.531918 containerd[1494]: time="2025-01-13T20:39:21.531860583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:39:21.532553 containerd[1494]: time="2025-01-13T20:39:21.532527234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:39:21.533037 containerd[1494]: time="2025-01-13T20:39:21.533014288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:39:21.533274 containerd[1494]: time="2025-01-13T20:39:21.533249947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:39:21.533349 containerd[1494]: time="2025-01-13T20:39:21.533333368Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 13 20:39:21.533541 containerd[1494]: time="2025-01-13T20:39:21.533521208Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:39:21.533697 containerd[1494]: time="2025-01-13T20:39:21.533677408Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:39:21.558958 containerd[1494]: time="2025-01-13T20:39:21.558878086Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:39:21.559823 containerd[1494]: time="2025-01-13T20:39:21.559297354Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:39:21.559823 containerd[1494]: time="2025-01-13T20:39:21.559413469Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:39:21.559823 containerd[1494]: time="2025-01-13T20:39:21.559441777Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:39:21.559823 containerd[1494]: time="2025-01-13T20:39:21.559462558Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:39:21.559823 containerd[1494]: time="2025-01-13T20:39:21.559754549Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:39:21.562625 containerd[1494]: time="2025-01-13T20:39:21.562585934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:39:21.563095 containerd[1494]: time="2025-01-13T20:39:21.563067662Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:39:21.563216 containerd[1494]: time="2025-01-13T20:39:21.563191792Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:39:21.564012 containerd[1494]: time="2025-01-13T20:39:21.563300549Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:39:21.564012 containerd[1494]: time="2025-01-13T20:39:21.563334087Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:39:21.564012 containerd[1494]: time="2025-01-13T20:39:21.563384010Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:39:21.564012 containerd[1494]: time="2025-01-13T20:39:21.563409139Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:39:21.564012 containerd[1494]: time="2025-01-13T20:39:21.563431255Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:39:21.564012 containerd[1494]: time="2025-01-13T20:39:21.563453027Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:39:21.564012 containerd[1494]: time="2025-01-13T20:39:21.563470932Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:39:21.564012 containerd[1494]: time="2025-01-13T20:39:21.563486566Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 13 20:39:21.564012 containerd[1494]: time="2025-01-13T20:39:21.563502856Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:39:21.564012 containerd[1494]: time="2025-01-13T20:39:21.563529965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:39:21.564012 containerd[1494]: time="2025-01-13T20:39:21.563552331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:39:21.564012 containerd[1494]: time="2025-01-13T20:39:21.563594865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:39:21.564012 containerd[1494]: time="2025-01-13T20:39:21.563617199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:39:21.564012 containerd[1494]: time="2025-01-13T20:39:21.563636772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:39:21.564409 containerd[1494]: time="2025-01-13T20:39:21.563656752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:39:21.564409 containerd[1494]: time="2025-01-13T20:39:21.563674156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:39:21.564409 containerd[1494]: time="2025-01-13T20:39:21.563696304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:39:21.564409 containerd[1494]: time="2025-01-13T20:39:21.563718337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:39:21.564409 containerd[1494]: time="2025-01-13T20:39:21.563740339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:39:21.564409 containerd[1494]: time="2025-01-13T20:39:21.563760725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:39:21.564409 containerd[1494]: time="2025-01-13T20:39:21.563780089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:39:21.564409 containerd[1494]: time="2025-01-13T20:39:21.563797995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:39:21.564409 containerd[1494]: time="2025-01-13T20:39:21.563832263Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:39:21.564409 containerd[1494]: time="2025-01-13T20:39:21.563865125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:39:21.564409 containerd[1494]: time="2025-01-13T20:39:21.563922812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:39:21.564409 containerd[1494]: time="2025-01-13T20:39:21.563944230Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:39:21.564841 containerd[1494]: time="2025-01-13T20:39:21.564812574Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 13 20:39:21.565097 containerd[1494]: time="2025-01-13T20:39:21.565069286Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:39:21.566460 containerd[1494]: time="2025-01-13T20:39:21.565178949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:39:21.566460 containerd[1494]: time="2025-01-13T20:39:21.565208298Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:39:21.566460 containerd[1494]: time="2025-01-13T20:39:21.565224995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:39:21.566460 containerd[1494]: time="2025-01-13T20:39:21.565245767Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:39:21.566460 containerd[1494]: time="2025-01-13T20:39:21.565274427Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:39:21.566460 containerd[1494]: time="2025-01-13T20:39:21.565300775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:39:21.566638 sshd_keygen[1485]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:39:21.566900 containerd[1494]: time="2025-01-13T20:39:21.565716823Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true 
DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:39:21.566900 containerd[1494]: time="2025-01-13T20:39:21.565799586Z" level=info msg="Connect containerd service" Jan 13 20:39:21.566900 containerd[1494]: time="2025-01-13T20:39:21.565842349Z" level=info msg="using legacy CRI server" Jan 13 20:39:21.566900 containerd[1494]: time="2025-01-13T20:39:21.565854783Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:39:21.566900 containerd[1494]: time="2025-01-13T20:39:21.566027418Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:39:21.567891 containerd[1494]: time="2025-01-13T20:39:21.567784461Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:39:21.568390 containerd[1494]: time="2025-01-13T20:39:21.568349787Z" level=info msg="Start subscribing containerd event" Jan 13 20:39:21.568497 containerd[1494]: time="2025-01-13T20:39:21.568477648Z" level=info msg="Start recovering state" Jan 13 20:39:21.568645 containerd[1494]: time="2025-01-13T20:39:21.568622748Z" level=info msg="Start event monitor" Jan 13 20:39:21.568737 containerd[1494]: time="2025-01-13T20:39:21.568712661Z" level=info msg="Start snapshots syncer" Jan 13 20:39:21.568815 containerd[1494]: time="2025-01-13T20:39:21.568796498Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:39:21.568946 containerd[1494]: time="2025-01-13T20:39:21.568916500Z" level=info msg="Start streaming server" Jan 13 20:39:21.569562 containerd[1494]: time="2025-01-13T20:39:21.569537304Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:39:21.569810 containerd[1494]: time="2025-01-13T20:39:21.569786241Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:39:21.588921 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:39:21.591944 containerd[1494]: time="2025-01-13T20:39:21.589628315Z" level=info msg="containerd successfully booted in 0.132818s" Jan 13 20:39:21.616147 systemd-networkd[1414]: eth0: Gained IPv6LL Jan 13 20:39:21.621135 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:39:21.624003 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:39:21.657643 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 20:39:21.667107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:39:21.701503 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:39:21.705288 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:39:21.777136 systemd[1]: Starting issuegen.service - Generate /run/issue... 
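The containerd error above (`no network config found in /etc/cni/net.d`) is expected at this point: the CRI plugin only warns because no CNI add-on has installed a network config yet, which normally happens after the cluster is bootstrapped. A minimal check of the directory the error names, purely illustrative:

```python
#!/usr/bin/env python3
# Minimal sketch (illustrative only): list whatever CNI network configs exist
# in the directory named by the "failed to load cni during init" error above.
from pathlib import Path

conf_dir = Path("/etc/cni/net.d")
found = sorted(p.name for p in conf_dir.glob("*.conf*")) if conf_dir.is_dir() else []
if found:
    print("CNI configs:", ", ".join(found))
else:
    print("no CNI network config yet (expected until a CNI add-on is installed)")
```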
Jan 13 20:39:21.790573 systemd[1]: Started sshd@0-10.0.0.97:22-10.0.0.1:60990.service - OpenSSH per-connection server daemon (10.0.0.1:60990). Jan 13 20:39:21.797363 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:39:21.802328 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 20:39:21.802665 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 20:39:21.809717 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:39:21.810062 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:39:21.831951 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:39:21.865220 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:39:21.909436 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:39:21.934944 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:39:21.946272 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:39:21.950575 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:39:22.024216 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 60990 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:39:22.027753 sshd-session[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:22.050073 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:39:22.069147 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:39:22.073919 systemd-logind[1481]: New session 1 of user core. Jan 13 20:39:22.113705 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:39:22.351730 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:39:22.359783 (systemd)[1578]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:39:22.408897 tar[1492]: linux-amd64/LICENSE Jan 13 20:39:22.408897 tar[1492]: linux-amd64/README.md Jan 13 20:39:22.557049 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:39:22.738726 systemd[1578]: Queued start job for default target default.target. Jan 13 20:39:22.751121 systemd[1578]: Created slice app.slice - User Application Slice. Jan 13 20:39:22.751156 systemd[1578]: Reached target paths.target - Paths. Jan 13 20:39:22.751176 systemd[1578]: Reached target timers.target - Timers. Jan 13 20:39:22.762030 systemd[1578]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:39:22.811756 systemd[1578]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:39:22.811902 systemd[1578]: Reached target sockets.target - Sockets. Jan 13 20:39:22.811927 systemd[1578]: Reached target basic.target - Basic System. Jan 13 20:39:22.812017 systemd[1578]: Reached target default.target - Main User Target. Jan 13 20:39:22.812080 systemd[1578]: Startup finished in 432ms. Jan 13 20:39:22.812267 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:39:22.830363 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:39:22.918082 systemd[1]: Started sshd@1-10.0.0.97:22-10.0.0.1:32774.service - OpenSSH per-connection server daemon (10.0.0.1:32774). 
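sshd identifies the accepted key above only by its fingerprint (`SHA256:uJ7Cm0...`). That fingerprint is the unpadded base64 of a SHA-256 digest over the raw public-key blob, so it can be matched against the authorized_keys entries installed earlier; a minimal sketch (the stdin usage is an assumption, e.g. feed it /home/core/.ssh/authorized_keys):

```python
#!/usr/bin/env python3
# Minimal sketch (illustrative only): recompute the SHA256 fingerprint that
# sshd logs for an accepted public key, from an authorized_keys-style line.
import base64
import hashlib
import sys

def fingerprint(pubkey_line: str) -> str:
    # An OpenSSH public key line looks like: "<type> <base64-blob> [comment]".
    blob_b64 = pubkey_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    # OpenSSH prints the digest as base64 without the trailing "=" padding.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    # Example: python3 fingerprint.py < /home/core/.ssh/authorized_keys
    for line in sys.stdin:
        line = line.strip()
        if line and not line.startswith("#"):
            print(fingerprint(line), line.split()[0])
```

Run over the authorized_keys file that update-ssh-keys refreshed earlier, this should reproduce the fingerprint sshd logged for the session.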
Jan 13 20:39:23.056318 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 32774 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:39:23.062670 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:23.079498 systemd-logind[1481]: New session 2 of user core. Jan 13 20:39:23.095211 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:39:23.371988 sshd[1594]: Connection closed by 10.0.0.1 port 32774 Jan 13 20:39:23.375347 sshd-session[1592]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:23.405712 systemd[1]: sshd@1-10.0.0.97:22-10.0.0.1:32774.service: Deactivated successfully. Jan 13 20:39:23.415025 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:39:23.422113 systemd-logind[1481]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:39:23.438675 systemd[1]: Started sshd@2-10.0.0.97:22-10.0.0.1:32786.service - OpenSSH per-connection server daemon (10.0.0.1:32786). Jan 13 20:39:23.444997 systemd-logind[1481]: Removed session 2. Jan 13 20:39:23.515719 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 32786 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:39:23.517164 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:23.530754 systemd-logind[1481]: New session 3 of user core. Jan 13 20:39:23.539377 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:39:23.631968 sshd[1601]: Connection closed by 10.0.0.1 port 32786 Jan 13 20:39:23.633074 sshd-session[1599]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:23.647938 systemd[1]: sshd@2-10.0.0.97:22-10.0.0.1:32786.service: Deactivated successfully. Jan 13 20:39:23.651066 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:39:23.686518 systemd-logind[1481]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:39:23.689053 systemd-logind[1481]: Removed session 3. Jan 13 20:39:24.426333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:39:24.430603 (kubelet)[1609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:39:24.456173 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:39:24.460045 systemd[1]: Startup finished in 1.479s (kernel) + 11.602s (initrd) + 9.027s (userspace) = 22.110s. Jan 13 20:39:25.760694 kubelet[1609]: E0113 20:39:25.760574 1609 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:39:25.766804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:39:25.767151 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:39:25.767724 systemd[1]: kubelet.service: Consumed 2.623s CPU time. Jan 13 20:39:33.806553 systemd[1]: Started sshd@3-10.0.0.97:22-10.0.0.1:53640.service - OpenSSH per-connection server daemon (10.0.0.1:53640). 
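The kubelet exit above is the usual pre-bootstrap state: the unit is enabled, but /var/lib/kubelet/config.yaml is only written once kubeadm init or join runs, so systemd keeps restarting the service (see the `Scheduled restart job, restart counter is at ...` entries below). A small diagnostic sketch that counts those exits from the journal, assuming it runs on the host with journal read access:

```python
#!/usr/bin/env python3
# Minimal sketch (illustrative only): count how often kubelet has exited with
# the "failed to load Kubelet config file" error seen above, i.e. how long the
# pre-kubeadm crash loop has been running. Needs journal read access.
import subprocess

out = subprocess.run(
    ["journalctl", "-u", "kubelet", "-o", "cat", "--no-pager"],
    capture_output=True, text=True, check=True,
).stdout

failures = [l for l in out.splitlines()
            if "failed to load kubelet config file" in l.lower()]
print(f"kubelet exited {len(failures)} time(s) waiting for /var/lib/kubelet/config.yaml")
```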
Jan 13 20:39:33.852718 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 53640 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:39:33.854360 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:33.858610 systemd-logind[1481]: New session 4 of user core. Jan 13 20:39:33.867023 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:39:33.926456 sshd[1627]: Connection closed by 10.0.0.1 port 53640 Jan 13 20:39:33.926828 sshd-session[1625]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:33.937789 systemd[1]: sshd@3-10.0.0.97:22-10.0.0.1:53640.service: Deactivated successfully. Jan 13 20:39:33.939877 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:39:33.941489 systemd-logind[1481]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:39:33.956155 systemd[1]: Started sshd@4-10.0.0.97:22-10.0.0.1:53642.service - OpenSSH per-connection server daemon (10.0.0.1:53642). Jan 13 20:39:33.957285 systemd-logind[1481]: Removed session 4. Jan 13 20:39:33.993877 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 53642 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:39:33.995932 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:34.000837 systemd-logind[1481]: New session 5 of user core. Jan 13 20:39:34.018039 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:39:34.071202 sshd[1634]: Connection closed by 10.0.0.1 port 53642 Jan 13 20:39:34.071979 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:34.079973 systemd[1]: sshd@4-10.0.0.97:22-10.0.0.1:53642.service: Deactivated successfully. Jan 13 20:39:34.082142 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:39:34.083988 systemd-logind[1481]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:39:34.085506 systemd[1]: Started sshd@5-10.0.0.97:22-10.0.0.1:53652.service - OpenSSH per-connection server daemon (10.0.0.1:53652). Jan 13 20:39:34.086396 systemd-logind[1481]: Removed session 5. Jan 13 20:39:34.128549 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 53652 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:39:34.130454 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:34.134412 systemd-logind[1481]: New session 6 of user core. Jan 13 20:39:34.143000 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:39:34.199038 sshd[1641]: Connection closed by 10.0.0.1 port 53652 Jan 13 20:39:34.199450 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:34.216271 systemd[1]: sshd@5-10.0.0.97:22-10.0.0.1:53652.service: Deactivated successfully. Jan 13 20:39:34.218302 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:39:34.219767 systemd-logind[1481]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:39:34.234323 systemd[1]: Started sshd@6-10.0.0.97:22-10.0.0.1:53668.service - OpenSSH per-connection server daemon (10.0.0.1:53668). Jan 13 20:39:34.235559 systemd-logind[1481]: Removed session 6. 
Jan 13 20:39:34.273495 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 53668 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:39:34.275103 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:34.279521 systemd-logind[1481]: New session 7 of user core. Jan 13 20:39:34.289018 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:39:34.348569 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:39:34.348973 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:39:34.368454 sudo[1649]: pam_unix(sudo:session): session closed for user root Jan 13 20:39:34.370246 sshd[1648]: Connection closed by 10.0.0.1 port 53668 Jan 13 20:39:34.370735 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:34.380024 systemd[1]: sshd@6-10.0.0.97:22-10.0.0.1:53668.service: Deactivated successfully. Jan 13 20:39:34.382165 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:39:34.383912 systemd-logind[1481]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:39:34.395334 systemd[1]: Started sshd@7-10.0.0.97:22-10.0.0.1:53672.service - OpenSSH per-connection server daemon (10.0.0.1:53672). Jan 13 20:39:34.396317 systemd-logind[1481]: Removed session 7. Jan 13 20:39:34.433443 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 53672 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:39:34.434842 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:34.439302 systemd-logind[1481]: New session 8 of user core. Jan 13 20:39:34.448987 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:39:34.502963 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:39:34.503323 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:39:34.507708 sudo[1658]: pam_unix(sudo:session): session closed for user root Jan 13 20:39:34.514973 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:39:34.515312 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:39:34.535148 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:39:34.571644 augenrules[1680]: No rules Jan 13 20:39:34.573582 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:39:34.573835 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:39:34.575141 sudo[1657]: pam_unix(sudo:session): session closed for user root Jan 13 20:39:34.576668 sshd[1656]: Connection closed by 10.0.0.1 port 53672 Jan 13 20:39:34.577033 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:34.587419 systemd[1]: sshd@7-10.0.0.97:22-10.0.0.1:53672.service: Deactivated successfully. Jan 13 20:39:34.589295 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:39:34.590950 systemd-logind[1481]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:39:34.592246 systemd[1]: Started sshd@8-10.0.0.97:22-10.0.0.1:53686.service - OpenSSH per-connection server daemon (10.0.0.1:53686). Jan 13 20:39:34.593104 systemd-logind[1481]: Removed session 8. 
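The sudo entries above (and the install.sh invocation just below) record every privileged command the provisioning flow ran, always in the same `user : PWD=... ; USER=... ; COMMAND=...` shape. A minimal sketch that pulls those commands out of journalctl output, assuming one journal entry per line:

```python
#!/usr/bin/env python3
# Minimal sketch (illustrative only): list the privileged commands recorded by
# sudo entries like the ones above. Assumes journalctl-style input, one entry
# per line, e.g.:  journalctl -o short --no-pager | python3 sudo_commands.py
import re
import sys

SUDO_LINE = re.compile(r"sudo\[\d+\]:\s+(\S+) : .*?\bCOMMAND=(.*)$")

for line in sys.stdin:
    m = SUDO_LINE.search(line)
    if m:
        user, command = m.groups()
        print(f"{user}: {command}")
```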
Jan 13 20:39:34.634620 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 53686 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:39:34.636127 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:34.639911 systemd-logind[1481]: New session 9 of user core. Jan 13 20:39:34.656980 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:39:34.711132 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:39:34.711491 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:39:35.287092 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:39:35.287325 (dockerd)[1711]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:39:36.017264 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:39:36.030006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:39:36.167819 dockerd[1711]: time="2025-01-13T20:39:36.167744415Z" level=info msg="Starting up" Jan 13 20:39:36.251812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:39:36.262167 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:39:36.639214 kubelet[1734]: E0113 20:39:36.639143 1734 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:39:36.646621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:39:36.646890 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:39:36.677312 dockerd[1711]: time="2025-01-13T20:39:36.677250224Z" level=info msg="Loading containers: start." Jan 13 20:39:37.094910 kernel: Initializing XFRM netlink socket Jan 13 20:39:37.185765 systemd-networkd[1414]: docker0: Link UP Jan 13 20:39:37.379435 dockerd[1711]: time="2025-01-13T20:39:37.379374147Z" level=info msg="Loading containers: done." Jan 13 20:39:37.425215 dockerd[1711]: time="2025-01-13T20:39:37.425131154Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:39:37.425422 dockerd[1711]: time="2025-01-13T20:39:37.425286369Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:39:37.425536 dockerd[1711]: time="2025-01-13T20:39:37.425501469Z" level=info msg="Daemon has completed initialization" Jan 13 20:39:37.501422 dockerd[1711]: time="2025-01-13T20:39:37.501289375Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:39:37.502277 systemd[1]: Started docker.service - Docker Application Container Engine. 
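Once the daemon reports `API listen on /run/docker.sock`, the Engine API is reachable over that Unix socket. A minimal stdlib-only sketch that queries GET /version on it (root or docker-group permissions assumed; the HTTP/1.0 request is only there to keep the reply un-chunked):

```python
#!/usr/bin/env python3
# Minimal sketch (illustrative only): talk to the Docker Engine API on the
# /run/docker.sock path the daemon announced above, using only the stdlib.
import json
import socket

def docker_get(path: str, sock_path: str = "/run/docker.sock") -> dict:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        # HTTP/1.0 keeps the response un-chunked, so splitting on the blank
        # line after the headers is good enough for a quick check like this.
        s.sendall(f"GET {path} HTTP/1.0\r\nHost: docker\r\n\r\n".encode())
        raw = b""
        while chunk := s.recv(4096):
            raw += chunk
    return json.loads(raw.split(b"\r\n\r\n", 1)[1])

if __name__ == "__main__":
    version = docker_get("/version")
    print(version["Version"], version["ApiVersion"])  # e.g. the 27.2.1 daemon logged above
```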
Jan 13 20:39:38.409591 containerd[1494]: time="2025-01-13T20:39:38.409520361Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 13 20:39:39.187562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1820173747.mount: Deactivated successfully. Jan 13 20:39:40.682897 containerd[1494]: time="2025-01-13T20:39:40.682793246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:40.683972 containerd[1494]: time="2025-01-13T20:39:40.683896170Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Jan 13 20:39:40.685561 containerd[1494]: time="2025-01-13T20:39:40.685490704Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:40.689969 containerd[1494]: time="2025-01-13T20:39:40.689887957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:40.691318 containerd[1494]: time="2025-01-13T20:39:40.691272191Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 2.281681144s" Jan 13 20:39:40.691366 containerd[1494]: time="2025-01-13T20:39:40.691318696Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Jan 13 20:39:40.718649 containerd[1494]: time="2025-01-13T20:39:40.718596127Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 13 20:39:42.681013 containerd[1494]: time="2025-01-13T20:39:42.680942849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:42.684937 containerd[1494]: time="2025-01-13T20:39:42.682703337Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Jan 13 20:39:42.685118 containerd[1494]: time="2025-01-13T20:39:42.685008739Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:42.690445 containerd[1494]: time="2025-01-13T20:39:42.690385218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:42.691293 containerd[1494]: time="2025-01-13T20:39:42.691259039Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 1.972612296s" Jan 13 
20:39:42.691359 containerd[1494]: time="2025-01-13T20:39:42.691293517Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Jan 13 20:39:42.716865 containerd[1494]: time="2025-01-13T20:39:42.716785321Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 13 20:39:44.762415 containerd[1494]: time="2025-01-13T20:39:44.761814314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:44.763083 containerd[1494]: time="2025-01-13T20:39:44.762987688Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Jan 13 20:39:44.765194 containerd[1494]: time="2025-01-13T20:39:44.765112134Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:44.771792 containerd[1494]: time="2025-01-13T20:39:44.770103952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:44.771792 containerd[1494]: time="2025-01-13T20:39:44.771677521Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 2.054859485s" Jan 13 20:39:44.771792 containerd[1494]: time="2025-01-13T20:39:44.771725601Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Jan 13 20:39:44.825535 containerd[1494]: time="2025-01-13T20:39:44.825188483Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 20:39:46.480783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount897692443.mount: Deactivated successfully. 
Jan 13 20:39:46.725487 containerd[1494]: time="2025-01-13T20:39:46.725416891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:46.726156 containerd[1494]: time="2025-01-13T20:39:46.726107912Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Jan 13 20:39:46.727206 containerd[1494]: time="2025-01-13T20:39:46.727174672Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:46.729101 containerd[1494]: time="2025-01-13T20:39:46.729069637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:46.729609 containerd[1494]: time="2025-01-13T20:39:46.729577883Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.90433732s" Jan 13 20:39:46.729642 containerd[1494]: time="2025-01-13T20:39:46.729606890Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 13 20:39:46.751933 containerd[1494]: time="2025-01-13T20:39:46.751787463Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:39:46.897418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:39:46.906035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:39:47.055831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:39:47.060576 (kubelet)[2036]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:39:47.117235 kubelet[2036]: E0113 20:39:47.117100 2036 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:39:47.122310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:39:47.122547 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:39:47.458722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1837881291.mount: Deactivated successfully. 
Jan 13 20:39:49.283042 containerd[1494]: time="2025-01-13T20:39:49.282973593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:49.283765 containerd[1494]: time="2025-01-13T20:39:49.283693531Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 20:39:49.284936 containerd[1494]: time="2025-01-13T20:39:49.284900758Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:49.287731 containerd[1494]: time="2025-01-13T20:39:49.287679834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:49.288810 containerd[1494]: time="2025-01-13T20:39:49.288773058Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.536948436s" Jan 13 20:39:49.288810 containerd[1494]: time="2025-01-13T20:39:49.288804548Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 20:39:49.312502 containerd[1494]: time="2025-01-13T20:39:49.312314902Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:39:49.836511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2519004413.mount: Deactivated successfully. 
Jan 13 20:39:49.845467 containerd[1494]: time="2025-01-13T20:39:49.845422190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:49.846216 containerd[1494]: time="2025-01-13T20:39:49.846148926Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 20:39:49.847460 containerd[1494]: time="2025-01-13T20:39:49.847425430Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:49.850443 containerd[1494]: time="2025-01-13T20:39:49.850409526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:49.851559 containerd[1494]: time="2025-01-13T20:39:49.851515797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 539.168845ms" Jan 13 20:39:49.851607 containerd[1494]: time="2025-01-13T20:39:49.851560052Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 20:39:49.876027 containerd[1494]: time="2025-01-13T20:39:49.875816068Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 13 20:39:50.388943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2562063092.mount: Deactivated successfully. Jan 13 20:39:53.477440 containerd[1494]: time="2025-01-13T20:39:53.477350011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:53.478232 containerd[1494]: time="2025-01-13T20:39:53.478155649Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 13 20:39:53.480107 containerd[1494]: time="2025-01-13T20:39:53.480063970Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:53.484141 containerd[1494]: time="2025-01-13T20:39:53.484088720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:53.485616 containerd[1494]: time="2025-01-13T20:39:53.485566473Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.609715801s" Jan 13 20:39:53.485616 containerd[1494]: time="2025-01-13T20:39:53.485596573Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 13 20:39:56.034637 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
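Taking the sizes and durations from the `Pulled image ... in Ns` entries above, the larger images come down at roughly 7-16 MB/s, while the tiny pause image is dominated by per-request overhead; the durations also cover registry round-trips and unpacking, so these are lower bounds on raw transfer speed. A small recomputation, with the numbers copied from the log:

```python
#!/usr/bin/env python3
# Minimal sketch (illustrative only): back out effective pull rates from the
# "Pulled image ... size X in Ys" entries above. Sizes in bytes, times in s,
# all copied from the log; rates are approximate (MB = 10**6 bytes).
pulls = {
    "kube-apiserver:v1.30.8":          (32_672_442, 2.281681144),
    "kube-controller-manager:v1.30.8": (31_051_521, 1.972612296),
    "kube-scheduler:v1.30.8":          (19_228_165, 2.054859485),
    "kube-proxy:v1.30.8":              (29_056_489, 1.904337320),
    "coredns:v1.11.1":                 (18_182_961, 2.536948436),
    "pause:3.9":                       (321_520,    0.539168845),
    "etcd:3.5.12-0":                   (57_236_178, 3.609715801),
}

for image, (size, secs) in pulls.items():
    print(f"{image:34s} {size / secs / 1e6:6.1f} MB/s")
```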
Jan 13 20:39:56.049166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:39:56.070097 systemd[1]: Reloading requested from client PID 2230 ('systemctl') (unit session-9.scope)... Jan 13 20:39:56.070116 systemd[1]: Reloading... Jan 13 20:39:56.172879 zram_generator::config[2269]: No configuration found. Jan 13 20:39:56.510021 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:39:56.588200 systemd[1]: Reloading finished in 517 ms. Jan 13 20:39:56.640510 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:39:56.640642 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:39:56.641029 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:39:56.644135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:39:56.803300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:39:56.814257 (kubelet)[2318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:39:56.857677 kubelet[2318]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:39:56.857677 kubelet[2318]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:39:56.857677 kubelet[2318]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:39:56.858242 kubelet[2318]: I0113 20:39:56.857704 2318 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:39:57.202443 kubelet[2318]: I0113 20:39:57.202369 2318 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:39:57.202443 kubelet[2318]: I0113 20:39:57.202423 2318 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:39:57.202676 kubelet[2318]: I0113 20:39:57.202653 2318 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:39:57.216186 kubelet[2318]: I0113 20:39:57.216133 2318 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:39:57.219242 kubelet[2318]: E0113 20:39:57.218901 2318 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 20:39:57.231561 kubelet[2318]: I0113 20:39:57.231516 2318 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:39:57.233234 kubelet[2318]: I0113 20:39:57.233188 2318 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:39:57.233434 kubelet[2318]: I0113 20:39:57.233227 2318 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:39:57.233556 kubelet[2318]: I0113 20:39:57.233458 2318 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:39:57.233556 kubelet[2318]: I0113 20:39:57.233468 2318 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:39:57.233625 kubelet[2318]: I0113 20:39:57.233610 2318 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:39:57.234236 kubelet[2318]: I0113 20:39:57.234209 2318 kubelet.go:400] "Attempting to sync node with API server" Jan 13 20:39:57.234236 kubelet[2318]: I0113 20:39:57.234228 2318 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:39:57.234311 kubelet[2318]: I0113 20:39:57.234260 2318 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:39:57.234311 kubelet[2318]: I0113 20:39:57.234283 2318 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:39:57.236811 kubelet[2318]: W0113 20:39:57.236747 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 20:39:57.236811 kubelet[2318]: E0113 20:39:57.236815 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 20:39:57.236930 kubelet[2318]: W0113 20:39:57.236741 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 20:39:57.236930 kubelet[2318]: E0113 20:39:57.236835 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 20:39:57.238326 kubelet[2318]: I0113 20:39:57.238303 2318 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:39:57.239494 kubelet[2318]: I0113 20:39:57.239465 2318 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:39:57.239554 kubelet[2318]: W0113 20:39:57.239531 2318 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:39:57.240246 kubelet[2318]: I0113 20:39:57.240229 2318 server.go:1264] "Started kubelet" Jan 13 20:39:57.241969 kubelet[2318]: I0113 20:39:57.241491 2318 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:39:57.243070 kubelet[2318]: I0113 20:39:57.242653 2318 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:39:57.243855 kubelet[2318]: I0113 20:39:57.243815 2318 server.go:455] "Adding debug handlers to kubelet server" Jan 13 20:39:57.244583 kubelet[2318]: I0113 20:39:57.244518 2318 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:39:57.245125 kubelet[2318]: I0113 20:39:57.244805 2318 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:39:57.245125 kubelet[2318]: I0113 20:39:57.244966 2318 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:39:57.245125 kubelet[2318]: I0113 20:39:57.245054 2318 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:39:57.245317 kubelet[2318]: I0113 20:39:57.245296 2318 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:39:57.245686 kubelet[2318]: W0113 20:39:57.245640 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 20:39:57.245686 kubelet[2318]: E0113 20:39:57.245684 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 20:39:57.246184 kubelet[2318]: E0113 20:39:57.246090 2318 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.97:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.97:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5b1f851e2dd4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:39:57.240208852 +0000 UTC m=+0.421741136,LastTimestamp:2025-01-13 20:39:57.240208852 +0000 UTC 
m=+0.421741136,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:39:57.247018 kubelet[2318]: E0113 20:39:57.246893 2318 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:39:57.247224 kubelet[2318]: I0113 20:39:57.247204 2318 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:39:57.247309 kubelet[2318]: I0113 20:39:57.247288 2318 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:39:57.247475 kubelet[2318]: E0113 20:39:57.247404 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="200ms" Jan 13 20:39:57.248403 kubelet[2318]: I0113 20:39:57.248384 2318 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:39:57.263697 kubelet[2318]: I0113 20:39:57.263588 2318 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:39:57.266920 kubelet[2318]: I0113 20:39:57.266519 2318 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:39:57.266920 kubelet[2318]: I0113 20:39:57.266569 2318 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:39:57.266920 kubelet[2318]: I0113 20:39:57.266592 2318 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:39:57.266920 kubelet[2318]: E0113 20:39:57.266641 2318 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:39:57.267762 kubelet[2318]: W0113 20:39:57.267731 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 20:39:57.267967 kubelet[2318]: E0113 20:39:57.267769 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 20:39:57.269508 kubelet[2318]: I0113 20:39:57.269490 2318 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:39:57.269508 kubelet[2318]: I0113 20:39:57.269505 2318 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:39:57.269581 kubelet[2318]: I0113 20:39:57.269521 2318 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:39:57.346324 kubelet[2318]: I0113 20:39:57.346283 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:39:57.346697 kubelet[2318]: E0113 20:39:57.346669 2318 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jan 13 20:39:57.367006 kubelet[2318]: E0113 20:39:57.366954 2318 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may 
not have completed yet" Jan 13 20:39:57.448971 kubelet[2318]: E0113 20:39:57.448929 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="400ms" Jan 13 20:39:57.548519 kubelet[2318]: I0113 20:39:57.548378 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:39:57.548772 kubelet[2318]: E0113 20:39:57.548744 2318 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jan 13 20:39:57.568050 kubelet[2318]: E0113 20:39:57.568001 2318 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:39:57.680589 kubelet[2318]: I0113 20:39:57.680511 2318 policy_none.go:49] "None policy: Start" Jan 13 20:39:57.681320 kubelet[2318]: I0113 20:39:57.681301 2318 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:39:57.681392 kubelet[2318]: I0113 20:39:57.681328 2318 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:39:57.687627 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:39:57.704221 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:39:57.708032 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:39:57.722965 kubelet[2318]: I0113 20:39:57.722933 2318 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:39:57.723222 kubelet[2318]: I0113 20:39:57.723178 2318 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:39:57.723328 kubelet[2318]: I0113 20:39:57.723314 2318 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:39:57.724645 kubelet[2318]: E0113 20:39:57.724621 2318 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 20:39:57.850011 kubelet[2318]: E0113 20:39:57.849950 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="800ms" Jan 13 20:39:57.950503 kubelet[2318]: I0113 20:39:57.950464 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:39:57.951095 kubelet[2318]: E0113 20:39:57.950923 2318 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jan 13 20:39:57.969069 kubelet[2318]: I0113 20:39:57.968992 2318 topology_manager.go:215] "Topology Admit Handler" podUID="506d450d93b4f419d1c810cfa493107e" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:39:57.970718 kubelet[2318]: I0113 20:39:57.970109 2318 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:39:57.971290 kubelet[2318]: I0113 20:39:57.971249 2318 
topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:39:57.976451 systemd[1]: Created slice kubepods-burstable-pod506d450d93b4f419d1c810cfa493107e.slice - libcontainer container kubepods-burstable-pod506d450d93b4f419d1c810cfa493107e.slice. Jan 13 20:39:57.989175 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice. Jan 13 20:39:57.993418 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. Jan 13 20:39:58.049600 kubelet[2318]: I0113 20:39:58.049530 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:39:58.049600 kubelet[2318]: I0113 20:39:58.049578 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:39:58.049778 kubelet[2318]: I0113 20:39:58.049608 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:39:58.049778 kubelet[2318]: I0113 20:39:58.049663 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/506d450d93b4f419d1c810cfa493107e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"506d450d93b4f419d1c810cfa493107e\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:39:58.049778 kubelet[2318]: I0113 20:39:58.049698 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/506d450d93b4f419d1c810cfa493107e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"506d450d93b4f419d1c810cfa493107e\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:39:58.049778 kubelet[2318]: I0113 20:39:58.049713 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/506d450d93b4f419d1c810cfa493107e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"506d450d93b4f419d1c810cfa493107e\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:39:58.049778 kubelet[2318]: I0113 20:39:58.049728 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 
20:39:58.049931 kubelet[2318]: I0113 20:39:58.049744 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:39:58.049931 kubelet[2318]: I0113 20:39:58.049757 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:39:58.107170 kubelet[2318]: W0113 20:39:58.106998 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 20:39:58.107170 kubelet[2318]: E0113 20:39:58.107044 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 20:39:58.285740 kubelet[2318]: E0113 20:39:58.285677 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:58.286295 containerd[1494]: time="2025-01-13T20:39:58.286254236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:506d450d93b4f419d1c810cfa493107e,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:58.291422 kubelet[2318]: E0113 20:39:58.291397 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:58.292139 containerd[1494]: time="2025-01-13T20:39:58.292080873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:58.296313 kubelet[2318]: E0113 20:39:58.296275 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:58.296605 containerd[1494]: time="2025-01-13T20:39:58.296559291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:58.587156 kubelet[2318]: W0113 20:39:58.586925 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 20:39:58.587156 kubelet[2318]: E0113 20:39:58.587009 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 20:39:58.612536 kubelet[2318]: W0113 
20:39:58.612456 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 20:39:58.612536 kubelet[2318]: E0113 20:39:58.612531 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 20:39:58.651325 kubelet[2318]: E0113 20:39:58.651253 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="1.6s" Jan 13 20:39:58.752306 kubelet[2318]: I0113 20:39:58.752255 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:39:58.752751 kubelet[2318]: E0113 20:39:58.752698 2318 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jan 13 20:39:58.768335 kubelet[2318]: W0113 20:39:58.768250 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 20:39:58.768335 kubelet[2318]: E0113 20:39:58.768332 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 20:39:58.864356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2870359436.mount: Deactivated successfully. 
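Up to this point every informer list from the new kubelet (PID 2318) -- Nodes, Services, CSIDrivers, RuntimeClasses -- fails with "connection refused" because the kube-apiserver static pod it is about to create is not yet serving on 10.0.0.97:6443. The following is a minimal client-go sketch of the same LIST the Node reflector keeps retrying; the endpoint is taken from the log, and credentials/TLS material are deliberately omitted since the point is only to reproduce the dial failure, not to authenticate.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Endpoint taken from the log above; no kubeconfig is loaded, so this
        // only demonstrates the connection-refused failure mode.
        cfg := &rest.Config{Host: "https://10.0.0.97:6443"}
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Same field selector and limit the reflector uses for the node object.
        _, err = client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
            FieldSelector: "metadata.name=localhost",
            Limit:         500,
        })
        fmt.Println(err) // dial tcp 10.0.0.97:6443: connect: connection refused
    }

Once the apiserver container started below comes up, the identical call succeeds and the reflectors stop logging.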
Jan 13 20:39:58.871169 containerd[1494]: time="2025-01-13T20:39:58.871124453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:39:58.873884 containerd[1494]: time="2025-01-13T20:39:58.873813651Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 20:39:58.874968 containerd[1494]: time="2025-01-13T20:39:58.874901731Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:39:58.876884 containerd[1494]: time="2025-01-13T20:39:58.876831195Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:39:58.877872 containerd[1494]: time="2025-01-13T20:39:58.877818668Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:39:58.878969 containerd[1494]: time="2025-01-13T20:39:58.878926115Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:39:58.880316 containerd[1494]: time="2025-01-13T20:39:58.880281290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:39:58.881213 containerd[1494]: time="2025-01-13T20:39:58.881174181Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 594.825673ms" Jan 13 20:39:58.881320 containerd[1494]: time="2025-01-13T20:39:58.881196566Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:39:58.884877 containerd[1494]: time="2025-01-13T20:39:58.884341954Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 592.148846ms" Jan 13 20:39:58.890628 containerd[1494]: time="2025-01-13T20:39:58.890579839Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 593.94029ms" Jan 13 20:39:59.002779 containerd[1494]: time="2025-01-13T20:39:59.002661167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:59.002779 containerd[1494]: time="2025-01-13T20:39:59.002736019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:59.002779 containerd[1494]: time="2025-01-13T20:39:59.002749732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:59.002969 containerd[1494]: time="2025-01-13T20:39:59.002911835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:59.004670 containerd[1494]: time="2025-01-13T20:39:59.004600496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:59.004670 containerd[1494]: time="2025-01-13T20:39:59.004645796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:59.004894 containerd[1494]: time="2025-01-13T20:39:59.004659880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:59.004894 containerd[1494]: time="2025-01-13T20:39:59.004855012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:59.006253 containerd[1494]: time="2025-01-13T20:39:59.002107847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:59.006794 containerd[1494]: time="2025-01-13T20:39:59.006283694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:59.006794 containerd[1494]: time="2025-01-13T20:39:59.006346978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:59.006898 containerd[1494]: time="2025-01-13T20:39:59.006730677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:59.032094 systemd[1]: Started cri-containerd-6dd0f2e7306c52e07f94d1e76fcab76dcb636ffcfc0a04946fbbb37f24c192ad.scope - libcontainer container 6dd0f2e7306c52e07f94d1e76fcab76dcb636ffcfc0a04946fbbb37f24c192ad. Jan 13 20:39:59.033675 systemd[1]: Started cri-containerd-7dccb43c463b9b1660c928cfda18e183a9ed70062e00d0459deaf470236173d5.scope - libcontainer container 7dccb43c463b9b1660c928cfda18e183a9ed70062e00d0459deaf470236173d5. Jan 13 20:39:59.039528 systemd[1]: Started cri-containerd-f6351d50b4ed99c6e0a181f2489a7e4397786155503c66d9c14ae01217ce0720.scope - libcontainer container f6351d50b4ed99c6e0a181f2489a7e4397786155503c66d9c14ae01217ce0720. 
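The three cri-containerd-*.scope units just started are the pause sandboxes for the control-plane static pods; their RunPodSandbox results appear in the next entries. A read-only sketch of the same CRI surface, assuming the default containerd socket path (the kubelet's deprecated --container-runtime-endpoint flag may point elsewhere), that lists those sandboxes over gRPC:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Default containerd CRI socket; adjust if the runtime endpoint differs.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := rt.ListPodSandbox(context.TODO(), &runtimeapi.ListPodSandboxRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, sb := range resp.Items {
            // Prints the sandbox id seen in the journal plus its pod and state.
            fmt.Println(sb.Id, sb.Metadata.Namespace+"/"+sb.Metadata.Name, sb.State)
        }
    }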
Jan 13 20:39:59.078085 containerd[1494]: time="2025-01-13T20:39:59.077431341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"7dccb43c463b9b1660c928cfda18e183a9ed70062e00d0459deaf470236173d5\"" Jan 13 20:39:59.080642 kubelet[2318]: E0113 20:39:59.080604 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:59.085259 containerd[1494]: time="2025-01-13T20:39:59.085213517Z" level=info msg="CreateContainer within sandbox \"7dccb43c463b9b1660c928cfda18e183a9ed70062e00d0459deaf470236173d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:39:59.085375 containerd[1494]: time="2025-01-13T20:39:59.085356593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6351d50b4ed99c6e0a181f2489a7e4397786155503c66d9c14ae01217ce0720\"" Jan 13 20:39:59.087762 kubelet[2318]: E0113 20:39:59.087735 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:59.088209 containerd[1494]: time="2025-01-13T20:39:59.088178029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:506d450d93b4f419d1c810cfa493107e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6dd0f2e7306c52e07f94d1e76fcab76dcb636ffcfc0a04946fbbb37f24c192ad\"" Jan 13 20:39:59.089085 kubelet[2318]: E0113 20:39:59.089069 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:59.090737 containerd[1494]: time="2025-01-13T20:39:59.090642359Z" level=info msg="CreateContainer within sandbox \"6dd0f2e7306c52e07f94d1e76fcab76dcb636ffcfc0a04946fbbb37f24c192ad\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:39:59.093894 containerd[1494]: time="2025-01-13T20:39:59.093560460Z" level=info msg="CreateContainer within sandbox \"f6351d50b4ed99c6e0a181f2489a7e4397786155503c66d9c14ae01217ce0720\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:39:59.117343 containerd[1494]: time="2025-01-13T20:39:59.117223892Z" level=info msg="CreateContainer within sandbox \"7dccb43c463b9b1660c928cfda18e183a9ed70062e00d0459deaf470236173d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0cd6c42ef0f0975e91602c131ee055e797094b56cd986aa34032efa2cd156f9a\"" Jan 13 20:39:59.118012 containerd[1494]: time="2025-01-13T20:39:59.117975773Z" level=info msg="StartContainer for \"0cd6c42ef0f0975e91602c131ee055e797094b56cd986aa34032efa2cd156f9a\"" Jan 13 20:39:59.124540 containerd[1494]: time="2025-01-13T20:39:59.124465527Z" level=info msg="CreateContainer within sandbox \"6dd0f2e7306c52e07f94d1e76fcab76dcb636ffcfc0a04946fbbb37f24c192ad\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"acfe63b1ced3dc94d9646b260254f4de788a2ec4a79e292f2dd9474fa859a8ed\"" Jan 13 20:39:59.126030 containerd[1494]: time="2025-01-13T20:39:59.124987140Z" level=info msg="StartContainer for \"acfe63b1ced3dc94d9646b260254f4de788a2ec4a79e292f2dd9474fa859a8ed\"" Jan 13 20:39:59.126115 
containerd[1494]: time="2025-01-13T20:39:59.126080961Z" level=info msg="CreateContainer within sandbox \"f6351d50b4ed99c6e0a181f2489a7e4397786155503c66d9c14ae01217ce0720\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"da3c845d4f3c8c7bf3cc5375a6e6bea4913b9268b01a7195c52944faf6667f87\"" Jan 13 20:39:59.126407 containerd[1494]: time="2025-01-13T20:39:59.126381499Z" level=info msg="StartContainer for \"da3c845d4f3c8c7bf3cc5375a6e6bea4913b9268b01a7195c52944faf6667f87\"" Jan 13 20:39:59.153490 systemd[1]: Started cri-containerd-0cd6c42ef0f0975e91602c131ee055e797094b56cd986aa34032efa2cd156f9a.scope - libcontainer container 0cd6c42ef0f0975e91602c131ee055e797094b56cd986aa34032efa2cd156f9a. Jan 13 20:39:59.161162 systemd[1]: Started cri-containerd-acfe63b1ced3dc94d9646b260254f4de788a2ec4a79e292f2dd9474fa859a8ed.scope - libcontainer container acfe63b1ced3dc94d9646b260254f4de788a2ec4a79e292f2dd9474fa859a8ed. Jan 13 20:39:59.165737 systemd[1]: Started cri-containerd-da3c845d4f3c8c7bf3cc5375a6e6bea4913b9268b01a7195c52944faf6667f87.scope - libcontainer container da3c845d4f3c8c7bf3cc5375a6e6bea4913b9268b01a7195c52944faf6667f87. Jan 13 20:39:59.234873 kubelet[2318]: E0113 20:39:59.234735 2318 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 20:39:59.245310 containerd[1494]: time="2025-01-13T20:39:59.245195452Z" level=info msg="StartContainer for \"acfe63b1ced3dc94d9646b260254f4de788a2ec4a79e292f2dd9474fa859a8ed\" returns successfully" Jan 13 20:39:59.257997 containerd[1494]: time="2025-01-13T20:39:59.257915441Z" level=info msg="StartContainer for \"0cd6c42ef0f0975e91602c131ee055e797094b56cd986aa34032efa2cd156f9a\" returns successfully" Jan 13 20:39:59.258203 containerd[1494]: time="2025-01-13T20:39:59.258071699Z" level=info msg="StartContainer for \"da3c845d4f3c8c7bf3cc5375a6e6bea4913b9268b01a7195c52944faf6667f87\" returns successfully" Jan 13 20:39:59.274872 kubelet[2318]: E0113 20:39:59.274722 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:59.280355 kubelet[2318]: E0113 20:39:59.279518 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:59.286129 kubelet[2318]: E0113 20:39:59.286073 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:00.289453 kubelet[2318]: E0113 20:40:00.289391 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:00.358148 kubelet[2318]: I0113 20:40:00.357578 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:40:00.454977 kubelet[2318]: E0113 20:40:00.454929 2318 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 20:40:00.554292 kubelet[2318]: I0113 20:40:00.554153 2318 
kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:40:01.238802 kubelet[2318]: I0113 20:40:01.238759 2318 apiserver.go:52] "Watching apiserver" Jan 13 20:40:01.245574 kubelet[2318]: I0113 20:40:01.245540 2318 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:40:01.293984 kubelet[2318]: E0113 20:40:01.293948 2318 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 13 20:40:01.294474 kubelet[2318]: E0113 20:40:01.294340 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:02.239667 systemd[1]: Reloading requested from client PID 2598 ('systemctl') (unit session-9.scope)... Jan 13 20:40:02.239682 systemd[1]: Reloading... Jan 13 20:40:02.320534 zram_generator::config[2640]: No configuration found. Jan 13 20:40:02.424592 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:40:02.516483 systemd[1]: Reloading finished in 276 ms. Jan 13 20:40:02.560593 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:40:02.578221 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:40:02.578488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:40:02.595232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:40:02.740786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:40:02.746569 (kubelet)[2682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:40:02.788670 kubelet[2682]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:40:02.788670 kubelet[2682]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:40:02.788670 kubelet[2682]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:40:02.788670 kubelet[2682]: I0113 20:40:02.788627 2682 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:40:02.796555 kubelet[2682]: I0113 20:40:02.796522 2682 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:40:02.796555 kubelet[2682]: I0113 20:40:02.796550 2682 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:40:02.796755 kubelet[2682]: I0113 20:40:02.796738 2682 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:40:02.797937 kubelet[2682]: I0113 20:40:02.797911 2682 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
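The second kubelet instance (PID 2682) reports "Client rotation is on" and loads /var/lib/kubelet/pki/kubelet-client-current.pem, the single PEM file that holds both the client certificate and key produced by the earlier CSR bootstrap. A small sketch for inspecting that file on the node; the same path is passed twice because certificate and key live in the one file:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "log"
    )

    func main() {
        const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem" // path from the log
        pair, err := tls.LoadX509KeyPair(pem, pem)
        if err != nil {
            log.Fatal(err)
        }
        cert, err := x509.ParseCertificate(pair.Certificate[0])
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("subject:", cert.Subject)
        fmt.Println("expires:", cert.NotAfter) // rotation kicks in before this date
    }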
Jan 13 20:40:02.799009 kubelet[2682]: I0113 20:40:02.798992 2682 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:40:02.806615 kubelet[2682]: I0113 20:40:02.806581 2682 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:40:02.806856 kubelet[2682]: I0113 20:40:02.806807 2682 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:40:02.807028 kubelet[2682]: I0113 20:40:02.806837 2682 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:40:02.807131 kubelet[2682]: I0113 20:40:02.807042 2682 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:40:02.807131 kubelet[2682]: I0113 20:40:02.807052 2682 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:40:02.807131 kubelet[2682]: I0113 20:40:02.807096 2682 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:40:02.807196 kubelet[2682]: I0113 20:40:02.807191 2682 kubelet.go:400] "Attempting to sync node with API server" Jan 13 20:40:02.807219 kubelet[2682]: I0113 20:40:02.807202 2682 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:40:02.807242 kubelet[2682]: I0113 20:40:02.807222 2682 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:40:02.807242 kubelet[2682]: I0113 20:40:02.807239 2682 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:40:02.808020 kubelet[2682]: I0113 20:40:02.807998 2682 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:40:02.808172 kubelet[2682]: I0113 20:40:02.808155 2682 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:40:02.809791 kubelet[2682]: I0113 20:40:02.809146 2682 server.go:1264] "Started kubelet" Jan 13 20:40:02.814004 kubelet[2682]: I0113 
20:40:02.813970 2682 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:40:02.814578 kubelet[2682]: I0113 20:40:02.814536 2682 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:40:02.814839 kubelet[2682]: I0113 20:40:02.814821 2682 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:40:02.816725 kubelet[2682]: I0113 20:40:02.815767 2682 server.go:455] "Adding debug handlers to kubelet server" Jan 13 20:40:02.816725 kubelet[2682]: I0113 20:40:02.816426 2682 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:40:02.818213 kubelet[2682]: I0113 20:40:02.818182 2682 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:40:02.818323 kubelet[2682]: I0113 20:40:02.818298 2682 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:40:02.818801 kubelet[2682]: I0113 20:40:02.818773 2682 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:40:02.819550 kubelet[2682]: I0113 20:40:02.819093 2682 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:40:02.819550 kubelet[2682]: I0113 20:40:02.819242 2682 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:40:02.824025 kubelet[2682]: I0113 20:40:02.823000 2682 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:40:02.825330 kubelet[2682]: E0113 20:40:02.824834 2682 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:40:02.831885 kubelet[2682]: I0113 20:40:02.831564 2682 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:40:02.833012 kubelet[2682]: I0113 20:40:02.832985 2682 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:40:02.833061 kubelet[2682]: I0113 20:40:02.833016 2682 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:40:02.833061 kubelet[2682]: I0113 20:40:02.833035 2682 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:40:02.833112 kubelet[2682]: E0113 20:40:02.833075 2682 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:40:02.866337 kubelet[2682]: I0113 20:40:02.866023 2682 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:40:02.866337 kubelet[2682]: I0113 20:40:02.866047 2682 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:40:02.866337 kubelet[2682]: I0113 20:40:02.866070 2682 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:40:02.866337 kubelet[2682]: I0113 20:40:02.866218 2682 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:40:02.866337 kubelet[2682]: I0113 20:40:02.866228 2682 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:40:02.866337 kubelet[2682]: I0113 20:40:02.866261 2682 policy_none.go:49] "None policy: Start" Jan 13 20:40:02.867190 kubelet[2682]: I0113 20:40:02.867174 2682 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:40:02.867252 kubelet[2682]: I0113 20:40:02.867221 2682 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:40:02.867368 kubelet[2682]: I0113 20:40:02.867354 2682 state_mem.go:75] "Updated machine memory state" Jan 13 20:40:02.871768 kubelet[2682]: I0113 20:40:02.871739 2682 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:40:02.871984 kubelet[2682]: I0113 20:40:02.871948 2682 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:40:02.872062 kubelet[2682]: I0113 20:40:02.872046 2682 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:40:02.923305 kubelet[2682]: I0113 20:40:02.923260 2682 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:40:02.929632 kubelet[2682]: I0113 20:40:02.929596 2682 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 20:40:02.929763 kubelet[2682]: I0113 20:40:02.929686 2682 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:40:02.933920 kubelet[2682]: I0113 20:40:02.933839 2682 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:40:02.934108 kubelet[2682]: I0113 20:40:02.933955 2682 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:40:02.934108 kubelet[2682]: I0113 20:40:02.934010 2682 topology_manager.go:215] "Topology Admit Handler" podUID="506d450d93b4f419d1c810cfa493107e" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:40:03.120332 kubelet[2682]: I0113 20:40:03.120237 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 13 20:40:03.120332 kubelet[2682]: I0113 20:40:03.120306 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:40:03.120332 kubelet[2682]: I0113 20:40:03.120334 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/506d450d93b4f419d1c810cfa493107e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"506d450d93b4f419d1c810cfa493107e\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:40:03.120762 kubelet[2682]: I0113 20:40:03.120388 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/506d450d93b4f419d1c810cfa493107e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"506d450d93b4f419d1c810cfa493107e\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:40:03.120762 kubelet[2682]: I0113 20:40:03.120427 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:40:03.120762 kubelet[2682]: I0113 20:40:03.120448 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:40:03.120762 kubelet[2682]: I0113 20:40:03.120469 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:40:03.120762 kubelet[2682]: I0113 20:40:03.120491 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/506d450d93b4f419d1c810cfa493107e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"506d450d93b4f419d1c810cfa493107e\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:40:03.120972 kubelet[2682]: I0113 20:40:03.120514 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:40:03.239961 kubelet[2682]: E0113 20:40:03.239693 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:03.239961 kubelet[2682]: E0113 20:40:03.239746 2682 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:03.239961 kubelet[2682]: E0113 20:40:03.239774 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:03.241207 sudo[2721]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:40:03.241584 sudo[2721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:40:03.744092 sudo[2721]: pam_unix(sudo:session): session closed for user root Jan 13 20:40:03.808221 kubelet[2682]: I0113 20:40:03.808152 2682 apiserver.go:52] "Watching apiserver" Jan 13 20:40:03.820011 kubelet[2682]: I0113 20:40:03.819918 2682 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:40:03.846709 kubelet[2682]: E0113 20:40:03.846474 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:03.853403 kubelet[2682]: E0113 20:40:03.853143 2682 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 20:40:03.853647 kubelet[2682]: E0113 20:40:03.853609 2682 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 13 20:40:03.854067 kubelet[2682]: E0113 20:40:03.854046 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:03.854971 kubelet[2682]: E0113 20:40:03.854563 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:03.869353 kubelet[2682]: I0113 20:40:03.868955 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8689371110000002 podStartE2EDuration="1.868937111s" podCreationTimestamp="2025-01-13 20:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:40:03.868423741 +0000 UTC m=+1.117462301" watchObservedRunningTime="2025-01-13 20:40:03.868937111 +0000 UTC m=+1.117975661" Jan 13 20:40:03.886354 kubelet[2682]: I0113 20:40:03.885570 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.885551427 podStartE2EDuration="1.885551427s" podCreationTimestamp="2025-01-13 20:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:40:03.875005686 +0000 UTC m=+1.124044246" watchObservedRunningTime="2025-01-13 20:40:03.885551427 +0000 UTC m=+1.134589977" Jan 13 20:40:03.886354 kubelet[2682]: I0113 20:40:03.885678 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8856703929999998 podStartE2EDuration="1.885670393s" podCreationTimestamp="2025-01-13 20:40:02 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:40:03.884517912 +0000 UTC m=+1.133556462" watchObservedRunningTime="2025-01-13 20:40:03.885670393 +0000 UTC m=+1.134708963" Jan 13 20:40:04.847374 kubelet[2682]: E0113 20:40:04.847316 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:04.847879 kubelet[2682]: E0113 20:40:04.847423 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:05.417710 sudo[1691]: pam_unix(sudo:session): session closed for user root Jan 13 20:40:05.419475 sshd[1690]: Connection closed by 10.0.0.1 port 53686 Jan 13 20:40:05.420240 sshd-session[1688]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:05.424821 systemd[1]: sshd@8-10.0.0.97:22-10.0.0.1:53686.service: Deactivated successfully. Jan 13 20:40:05.427013 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:40:05.427251 systemd[1]: session-9.scope: Consumed 5.212s CPU time, 187.5M memory peak, 0B memory swap peak. Jan 13 20:40:05.427788 systemd-logind[1481]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:40:05.428899 systemd-logind[1481]: Removed session 9. Jan 13 20:40:06.351495 update_engine[1484]: I20250113 20:40:06.351326 1484 update_attempter.cc:509] Updating boot flags... Jan 13 20:40:06.659870 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2770) Jan 13 20:40:06.696895 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2772) Jan 13 20:40:08.772558 kubelet[2682]: E0113 20:40:08.772505 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:10.154079 kubelet[2682]: E0113 20:40:10.154023 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:10.856896 kubelet[2682]: E0113 20:40:10.856516 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:11.142215 kubelet[2682]: E0113 20:40:11.142038 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:11.857800 kubelet[2682]: E0113 20:40:11.857715 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:16.861584 kubelet[2682]: I0113 20:40:16.861528 2682 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:40:16.862147 kubelet[2682]: I0113 20:40:16.862108 2682 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:40:16.862205 containerd[1494]: time="2025-01-13T20:40:16.861935372Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 20:40:17.795019 kubelet[2682]: I0113 20:40:17.794798 2682 topology_manager.go:215] "Topology Admit Handler" podUID="9d299980-d1c1-4814-a29a-c2513c6bdeaa" podNamespace="kube-system" podName="kube-proxy-szt4m" Jan 13 20:40:17.799554 kubelet[2682]: I0113 20:40:17.799340 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d299980-d1c1-4814-a29a-c2513c6bdeaa-kube-proxy\") pod \"kube-proxy-szt4m\" (UID: \"9d299980-d1c1-4814-a29a-c2513c6bdeaa\") " pod="kube-system/kube-proxy-szt4m" Jan 13 20:40:17.799554 kubelet[2682]: I0113 20:40:17.799403 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d299980-d1c1-4814-a29a-c2513c6bdeaa-xtables-lock\") pod \"kube-proxy-szt4m\" (UID: \"9d299980-d1c1-4814-a29a-c2513c6bdeaa\") " pod="kube-system/kube-proxy-szt4m" Jan 13 20:40:17.799554 kubelet[2682]: I0113 20:40:17.799430 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d299980-d1c1-4814-a29a-c2513c6bdeaa-lib-modules\") pod \"kube-proxy-szt4m\" (UID: \"9d299980-d1c1-4814-a29a-c2513c6bdeaa\") " pod="kube-system/kube-proxy-szt4m" Jan 13 20:40:17.799554 kubelet[2682]: I0113 20:40:17.799540 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62f7k\" (UniqueName: \"kubernetes.io/projected/9d299980-d1c1-4814-a29a-c2513c6bdeaa-kube-api-access-62f7k\") pod \"kube-proxy-szt4m\" (UID: \"9d299980-d1c1-4814-a29a-c2513c6bdeaa\") " pod="kube-system/kube-proxy-szt4m" Jan 13 20:40:17.807117 kubelet[2682]: I0113 20:40:17.807003 2682 topology_manager.go:215] "Topology Admit Handler" podUID="4a908f06-106f-4b55-bc43-021bd5c26297" podNamespace="kube-system" podName="cilium-wzw9c" Jan 13 20:40:17.809099 systemd[1]: Created slice kubepods-besteffort-pod9d299980_d1c1_4814_a29a_c2513c6bdeaa.slice - libcontainer container kubepods-besteffort-pod9d299980_d1c1_4814_a29a_c2513c6bdeaa.slice. Jan 13 20:40:17.826997 systemd[1]: Created slice kubepods-burstable-pod4a908f06_106f_4b55_bc43_021bd5c26297.slice - libcontainer container kubepods-burstable-pod4a908f06_106f_4b55_bc43_021bd5c26297.slice. Jan 13 20:40:17.922801 kubelet[2682]: I0113 20:40:17.922380 2682 topology_manager.go:215] "Topology Admit Handler" podUID="353a00b4-f807-4712-80d0-a3f47f2ea44d" podNamespace="kube-system" podName="cilium-operator-599987898-fx856" Jan 13 20:40:17.931714 systemd[1]: Created slice kubepods-besteffort-pod353a00b4_f807_4712_80d0_a3f47f2ea44d.slice - libcontainer container kubepods-besteffort-pod353a00b4_f807_4712_80d0_a3f47f2ea44d.slice. 
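The kubepods-*.slice names created above encode each pod's QoS class and UID, with the dashes in the UID mapped to underscores so the name is a valid systemd unit. A tiny illustrative sketch of that naming convention (not the kubelet's own implementation), using the kube-proxy pod UID from the log:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName shows how a pod UID and QoS class ("besteffort", "burstable")
    // map onto the slice names visible in the journal.
    func sliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("besteffort", "9d299980-d1c1-4814-a29a-c2513c6bdeaa"))
        // kubepods-besteffort-pod9d299980_d1c1_4814_a29a_c2513c6bdeaa.slice
    }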
Jan 13 20:40:18.000209 kubelet[2682]: I0113 20:40:18.000153 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a908f06-106f-4b55-bc43-021bd5c26297-cilium-config-path\") pod \"cilium-wzw9c\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " pod="kube-system/cilium-wzw9c" Jan 13 20:40:18.000209 kubelet[2682]: I0113 20:40:18.000191 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4a908f06-106f-4b55-bc43-021bd5c26297-hubble-tls\") pod \"cilium-wzw9c\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " pod="kube-system/cilium-wzw9c" Jan 13 20:40:18.000209 kubelet[2682]: I0113 20:40:18.000215 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-cilium-run\") pod \"cilium-wzw9c\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " pod="kube-system/cilium-wzw9c" Jan 13 20:40:18.000209 kubelet[2682]: I0113 20:40:18.000231 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-hostproc\") pod \"cilium-wzw9c\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " pod="kube-system/cilium-wzw9c" Jan 13 20:40:18.000551 kubelet[2682]: I0113 20:40:18.000248 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-lib-modules\") pod \"cilium-wzw9c\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " pod="kube-system/cilium-wzw9c" Jan 13 20:40:18.000551 kubelet[2682]: I0113 20:40:18.000301 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99jsc\" (UniqueName: \"kubernetes.io/projected/4a908f06-106f-4b55-bc43-021bd5c26297-kube-api-access-99jsc\") pod \"cilium-wzw9c\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " pod="kube-system/cilium-wzw9c" Jan 13 20:40:18.000551 kubelet[2682]: I0113 20:40:18.000339 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-bpf-maps\") pod \"cilium-wzw9c\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " pod="kube-system/cilium-wzw9c" Jan 13 20:40:18.000551 kubelet[2682]: I0113 20:40:18.000361 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-cni-path\") pod \"cilium-wzw9c\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " pod="kube-system/cilium-wzw9c" Jan 13 20:40:18.000551 kubelet[2682]: I0113 20:40:18.000385 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-etc-cni-netd\") pod \"cilium-wzw9c\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " pod="kube-system/cilium-wzw9c" Jan 13 20:40:18.000551 kubelet[2682]: I0113 20:40:18.000406 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-cilium-cgroup\") pod \"cilium-wzw9c\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " pod="kube-system/cilium-wzw9c" Jan 13 20:40:18.000738 kubelet[2682]: I0113 20:40:18.000427 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-xtables-lock\") pod \"cilium-wzw9c\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " pod="kube-system/cilium-wzw9c" Jan 13 20:40:18.000738 kubelet[2682]: I0113 20:40:18.000450 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-host-proc-sys-kernel\") pod \"cilium-wzw9c\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " pod="kube-system/cilium-wzw9c" Jan 13 20:40:18.000738 kubelet[2682]: I0113 20:40:18.000473 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4a908f06-106f-4b55-bc43-021bd5c26297-clustermesh-secrets\") pod \"cilium-wzw9c\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " pod="kube-system/cilium-wzw9c" Jan 13 20:40:18.000738 kubelet[2682]: I0113 20:40:18.000496 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-host-proc-sys-net\") pod \"cilium-wzw9c\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " pod="kube-system/cilium-wzw9c" Jan 13 20:40:18.101453 kubelet[2682]: I0113 20:40:18.101414 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/353a00b4-f807-4712-80d0-a3f47f2ea44d-cilium-config-path\") pod \"cilium-operator-599987898-fx856\" (UID: \"353a00b4-f807-4712-80d0-a3f47f2ea44d\") " pod="kube-system/cilium-operator-599987898-fx856" Jan 13 20:40:18.101453 kubelet[2682]: I0113 20:40:18.101446 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhpz7\" (UniqueName: \"kubernetes.io/projected/353a00b4-f807-4712-80d0-a3f47f2ea44d-kube-api-access-mhpz7\") pod \"cilium-operator-599987898-fx856\" (UID: \"353a00b4-f807-4712-80d0-a3f47f2ea44d\") " pod="kube-system/cilium-operator-599987898-fx856" Jan 13 20:40:18.123572 kubelet[2682]: E0113 20:40:18.123539 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:18.124145 containerd[1494]: time="2025-01-13T20:40:18.124098401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-szt4m,Uid:9d299980-d1c1-4814-a29a-c2513c6bdeaa,Namespace:kube-system,Attempt:0,}" Jan 13 20:40:18.130018 kubelet[2682]: E0113 20:40:18.129975 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:18.131032 containerd[1494]: time="2025-01-13T20:40:18.130458884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wzw9c,Uid:4a908f06-106f-4b55-bc43-021bd5c26297,Namespace:kube-system,Attempt:0,}" Jan 13 20:40:18.152873 containerd[1494]: 
time="2025-01-13T20:40:18.152433377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:40:18.152873 containerd[1494]: time="2025-01-13T20:40:18.152565624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:40:18.152873 containerd[1494]: time="2025-01-13T20:40:18.152590316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:18.152873 containerd[1494]: time="2025-01-13T20:40:18.152700176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:18.165926 containerd[1494]: time="2025-01-13T20:40:18.165775434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:40:18.166117 containerd[1494]: time="2025-01-13T20:40:18.166043887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:40:18.166117 containerd[1494]: time="2025-01-13T20:40:18.166070713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:18.167894 containerd[1494]: time="2025-01-13T20:40:18.166915143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:18.174052 systemd[1]: Started cri-containerd-2360fd5261461597a22ed52273e3ac1589ea9f5ddafc0ffeec403c7cee46b995.scope - libcontainer container 2360fd5261461597a22ed52273e3ac1589ea9f5ddafc0ffeec403c7cee46b995. Jan 13 20:40:18.184614 systemd[1]: Started cri-containerd-d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03.scope - libcontainer container d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03. 
Jan 13 20:40:18.215272 containerd[1494]: time="2025-01-13T20:40:18.215217935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-szt4m,Uid:9d299980-d1c1-4814-a29a-c2513c6bdeaa,Namespace:kube-system,Attempt:0,} returns sandbox id \"2360fd5261461597a22ed52273e3ac1589ea9f5ddafc0ffeec403c7cee46b995\"" Jan 13 20:40:18.216565 kubelet[2682]: E0113 20:40:18.216520 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:18.218903 containerd[1494]: time="2025-01-13T20:40:18.218839406Z" level=info msg="CreateContainer within sandbox \"2360fd5261461597a22ed52273e3ac1589ea9f5ddafc0ffeec403c7cee46b995\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:40:18.221070 containerd[1494]: time="2025-01-13T20:40:18.221039548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wzw9c,Uid:4a908f06-106f-4b55-bc43-021bd5c26297,Namespace:kube-system,Attempt:0,} returns sandbox id \"d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03\"" Jan 13 20:40:18.222700 kubelet[2682]: E0113 20:40:18.221707 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:18.222809 containerd[1494]: time="2025-01-13T20:40:18.222790910Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:40:18.234945 kubelet[2682]: E0113 20:40:18.234745 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:18.235533 containerd[1494]: time="2025-01-13T20:40:18.235487374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-fx856,Uid:353a00b4-f807-4712-80d0-a3f47f2ea44d,Namespace:kube-system,Attempt:0,}" Jan 13 20:40:18.240004 containerd[1494]: time="2025-01-13T20:40:18.239949798Z" level=info msg="CreateContainer within sandbox \"2360fd5261461597a22ed52273e3ac1589ea9f5ddafc0ffeec403c7cee46b995\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1ac142d4d25ecf8c4b1194853f4e849daed8fa5a1711a998a88ff5e653f0b2aa\"" Jan 13 20:40:18.240505 containerd[1494]: time="2025-01-13T20:40:18.240480441Z" level=info msg="StartContainer for \"1ac142d4d25ecf8c4b1194853f4e849daed8fa5a1711a998a88ff5e653f0b2aa\"" Jan 13 20:40:18.268684 containerd[1494]: time="2025-01-13T20:40:18.268497480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:40:18.268684 containerd[1494]: time="2025-01-13T20:40:18.268567206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:40:18.268684 containerd[1494]: time="2025-01-13T20:40:18.268584002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:18.268930 containerd[1494]: time="2025-01-13T20:40:18.268695084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:18.274145 systemd[1]: Started cri-containerd-1ac142d4d25ecf8c4b1194853f4e849daed8fa5a1711a998a88ff5e653f0b2aa.scope - libcontainer container 1ac142d4d25ecf8c4b1194853f4e849daed8fa5a1711a998a88ff5e653f0b2aa. Jan 13 20:40:18.294158 systemd[1]: Started cri-containerd-8841705b34dbe218024df0ac722ca99ff575108984f9dc0821de220b397d2aa9.scope - libcontainer container 8841705b34dbe218024df0ac722ca99ff575108984f9dc0821de220b397d2aa9. Jan 13 20:40:18.320437 containerd[1494]: time="2025-01-13T20:40:18.320304417Z" level=info msg="StartContainer for \"1ac142d4d25ecf8c4b1194853f4e849daed8fa5a1711a998a88ff5e653f0b2aa\" returns successfully" Jan 13 20:40:18.339726 containerd[1494]: time="2025-01-13T20:40:18.339545009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-fx856,Uid:353a00b4-f807-4712-80d0-a3f47f2ea44d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8841705b34dbe218024df0ac722ca99ff575108984f9dc0821de220b397d2aa9\"" Jan 13 20:40:18.340913 kubelet[2682]: E0113 20:40:18.340556 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:18.777465 kubelet[2682]: E0113 20:40:18.777421 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:18.871976 kubelet[2682]: E0113 20:40:18.871905 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:18.882267 kubelet[2682]: I0113 20:40:18.882199 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-szt4m" podStartSLOduration=1.8821755900000001 podStartE2EDuration="1.88217559s" podCreationTimestamp="2025-01-13 20:40:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:40:18.881750249 +0000 UTC m=+16.130788799" watchObservedRunningTime="2025-01-13 20:40:18.88217559 +0000 UTC m=+16.131214140" Jan 13 20:40:29.299050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1848733642.mount: Deactivated successfully. Jan 13 20:40:31.678132 systemd[1]: Started sshd@9-10.0.0.97:22-10.0.0.1:51618.service - OpenSSH per-connection server daemon (10.0.0.1:51618). Jan 13 20:40:31.731001 sshd[3087]: Accepted publickey for core from 10.0.0.1 port 51618 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:40:31.733171 sshd-session[3087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:31.737647 systemd-logind[1481]: New session 10 of user core. Jan 13 20:40:31.746144 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:40:31.922584 sshd[3089]: Connection closed by 10.0.0.1 port 51618 Jan 13 20:40:31.923130 sshd-session[3087]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:31.926710 systemd[1]: sshd@9-10.0.0.97:22-10.0.0.1:51618.service: Deactivated successfully. Jan 13 20:40:31.929570 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:40:31.931860 systemd-logind[1481]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:40:31.933248 systemd-logind[1481]: Removed session 10. 
Jan 13 20:40:33.386076 containerd[1494]: time="2025-01-13T20:40:33.385994383Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:40:33.386962 containerd[1494]: time="2025-01-13T20:40:33.386915687Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735311" Jan 13 20:40:33.388587 containerd[1494]: time="2025-01-13T20:40:33.388547139Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:40:33.391154 containerd[1494]: time="2025-01-13T20:40:33.391094024Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.168274354s" Jan 13 20:40:33.391154 containerd[1494]: time="2025-01-13T20:40:33.391140236Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 20:40:33.393273 containerd[1494]: time="2025-01-13T20:40:33.393070301Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:40:33.394350 containerd[1494]: time="2025-01-13T20:40:33.394319604Z" level=info msg="CreateContainer within sandbox \"d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:40:33.410704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1740601364.mount: Deactivated successfully. Jan 13 20:40:33.414277 containerd[1494]: time="2025-01-13T20:40:33.414220225Z" level=info msg="CreateContainer within sandbox \"d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf\"" Jan 13 20:40:33.414673 containerd[1494]: time="2025-01-13T20:40:33.414632585Z" level=info msg="StartContainer for \"643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf\"" Jan 13 20:40:33.453161 systemd[1]: Started cri-containerd-643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf.scope - libcontainer container 643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf. Jan 13 20:40:33.526353 containerd[1494]: time="2025-01-13T20:40:33.526219954Z" level=info msg="StartContainer for \"643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf\" returns successfully" Jan 13 20:40:33.543057 systemd[1]: cri-containerd-643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf.scope: Deactivated successfully. 
Jan 13 20:40:33.900484 kubelet[2682]: E0113 20:40:33.900439 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:34.040883 containerd[1494]: time="2025-01-13T20:40:34.038800142Z" level=info msg="shim disconnected" id=643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf namespace=k8s.io Jan 13 20:40:34.040883 containerd[1494]: time="2025-01-13T20:40:34.038871345Z" level=warning msg="cleaning up after shim disconnected" id=643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf namespace=k8s.io Jan 13 20:40:34.040883 containerd[1494]: time="2025-01-13T20:40:34.038880433Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:40:34.407867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf-rootfs.mount: Deactivated successfully. Jan 13 20:40:34.905491 kubelet[2682]: E0113 20:40:34.904657 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:34.907536 containerd[1494]: time="2025-01-13T20:40:34.907491899Z" level=info msg="CreateContainer within sandbox \"d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:40:34.931791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1884704267.mount: Deactivated successfully. Jan 13 20:40:34.934062 containerd[1494]: time="2025-01-13T20:40:34.934031813Z" level=info msg="CreateContainer within sandbox \"d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798\"" Jan 13 20:40:34.935204 containerd[1494]: time="2025-01-13T20:40:34.935180672Z" level=info msg="StartContainer for \"98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798\"" Jan 13 20:40:34.967434 systemd[1]: Started cri-containerd-98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798.scope - libcontainer container 98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798. Jan 13 20:40:35.012442 containerd[1494]: time="2025-01-13T20:40:35.012394736Z" level=info msg="StartContainer for \"98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798\" returns successfully" Jan 13 20:40:35.017713 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:40:35.017986 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:40:35.018058 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:40:35.025315 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:40:35.025564 systemd[1]: cri-containerd-98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798.scope: Deactivated successfully. Jan 13 20:40:35.064548 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 20:40:35.067015 containerd[1494]: time="2025-01-13T20:40:35.066935645Z" level=info msg="shim disconnected" id=98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798 namespace=k8s.io Jan 13 20:40:35.067015 containerd[1494]: time="2025-01-13T20:40:35.067007830Z" level=warning msg="cleaning up after shim disconnected" id=98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798 namespace=k8s.io Jan 13 20:40:35.067169 containerd[1494]: time="2025-01-13T20:40:35.067021738Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:40:35.342652 containerd[1494]: time="2025-01-13T20:40:35.342600488Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:40:35.343608 containerd[1494]: time="2025-01-13T20:40:35.343557197Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907213" Jan 13 20:40:35.344798 containerd[1494]: time="2025-01-13T20:40:35.344753857Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:40:35.346067 containerd[1494]: time="2025-01-13T20:40:35.346027181Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.952920698s" Jan 13 20:40:35.346067 containerd[1494]: time="2025-01-13T20:40:35.346056300Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 20:40:35.347997 containerd[1494]: time="2025-01-13T20:40:35.347959887Z" level=info msg="CreateContainer within sandbox \"8841705b34dbe218024df0ac722ca99ff575108984f9dc0821de220b397d2aa9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:40:35.363475 containerd[1494]: time="2025-01-13T20:40:35.363420609Z" level=info msg="CreateContainer within sandbox \"8841705b34dbe218024df0ac722ca99ff575108984f9dc0821de220b397d2aa9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276\"" Jan 13 20:40:35.364806 containerd[1494]: time="2025-01-13T20:40:35.363953859Z" level=info msg="StartContainer for \"055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276\"" Jan 13 20:40:35.394998 systemd[1]: Started cri-containerd-055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276.scope - libcontainer container 055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276. Jan 13 20:40:35.409038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798-rootfs.mount: Deactivated successfully. 
Jan 13 20:40:35.430340 containerd[1494]: time="2025-01-13T20:40:35.430292822Z" level=info msg="StartContainer for \"055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276\" returns successfully" Jan 13 20:40:35.910463 kubelet[2682]: E0113 20:40:35.910392 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:35.914634 kubelet[2682]: E0113 20:40:35.914572 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:35.919655 containerd[1494]: time="2025-01-13T20:40:35.919594681Z" level=info msg="CreateContainer within sandbox \"d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:40:36.062929 kubelet[2682]: I0113 20:40:36.060786 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-fx856" podStartSLOduration=2.055367723 podStartE2EDuration="19.060754321s" podCreationTimestamp="2025-01-13 20:40:17 +0000 UTC" firstStartedPulling="2025-01-13 20:40:18.341270757 +0000 UTC m=+15.590309307" lastFinishedPulling="2025-01-13 20:40:35.346657365 +0000 UTC m=+32.595695905" observedRunningTime="2025-01-13 20:40:36.060741326 +0000 UTC m=+33.309779906" watchObservedRunningTime="2025-01-13 20:40:36.060754321 +0000 UTC m=+33.309792891" Jan 13 20:40:36.185134 containerd[1494]: time="2025-01-13T20:40:36.183763627Z" level=info msg="CreateContainer within sandbox \"d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e\"" Jan 13 20:40:36.185134 containerd[1494]: time="2025-01-13T20:40:36.184385954Z" level=info msg="StartContainer for \"0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e\"" Jan 13 20:40:36.248033 systemd[1]: Started cri-containerd-0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e.scope - libcontainer container 0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e. Jan 13 20:40:36.293133 systemd[1]: cri-containerd-0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e.scope: Deactivated successfully. Jan 13 20:40:36.334153 containerd[1494]: time="2025-01-13T20:40:36.334062613Z" level=info msg="StartContainer for \"0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e\" returns successfully" Jan 13 20:40:36.408645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e-rootfs.mount: Deactivated successfully. Jan 13 20:40:36.918250 kubelet[2682]: E0113 20:40:36.918139 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:36.918250 kubelet[2682]: E0113 20:40:36.918203 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:36.935916 systemd[1]: Started sshd@10-10.0.0.97:22-10.0.0.1:38120.service - OpenSSH per-connection server daemon (10.0.0.1:38120). 
Jan 13 20:40:37.008098 sshd[3334]: Accepted publickey for core from 10.0.0.1 port 38120 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:40:37.010081 sshd-session[3334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:37.014376 systemd-logind[1481]: New session 11 of user core. Jan 13 20:40:37.026014 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:40:37.060686 containerd[1494]: time="2025-01-13T20:40:37.060623072Z" level=info msg="shim disconnected" id=0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e namespace=k8s.io Jan 13 20:40:37.060686 containerd[1494]: time="2025-01-13T20:40:37.060673123Z" level=warning msg="cleaning up after shim disconnected" id=0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e namespace=k8s.io Jan 13 20:40:37.060686 containerd[1494]: time="2025-01-13T20:40:37.060681820Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:40:37.207398 sshd[3336]: Connection closed by 10.0.0.1 port 38120 Jan 13 20:40:37.207899 sshd-session[3334]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:37.213328 systemd[1]: sshd@10-10.0.0.97:22-10.0.0.1:38120.service: Deactivated successfully. Jan 13 20:40:37.216286 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:40:37.217321 systemd-logind[1481]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:40:37.218223 systemd-logind[1481]: Removed session 11. Jan 13 20:40:37.926486 kubelet[2682]: E0113 20:40:37.924866 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:37.929400 containerd[1494]: time="2025-01-13T20:40:37.929337607Z" level=info msg="CreateContainer within sandbox \"d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:40:38.331738 containerd[1494]: time="2025-01-13T20:40:38.331493960Z" level=info msg="CreateContainer within sandbox \"d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880\"" Jan 13 20:40:38.332542 containerd[1494]: time="2025-01-13T20:40:38.332503657Z" level=info msg="StartContainer for \"55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880\"" Jan 13 20:40:38.379226 systemd[1]: Started cri-containerd-55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880.scope - libcontainer container 55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880. Jan 13 20:40:38.416897 systemd[1]: cri-containerd-55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880.scope: Deactivated successfully. Jan 13 20:40:38.521390 containerd[1494]: time="2025-01-13T20:40:38.521315803Z" level=info msg="StartContainer for \"55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880\" returns successfully" Jan 13 20:40:38.546061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880-rootfs.mount: Deactivated successfully. 
Jan 13 20:40:38.698035 containerd[1494]: time="2025-01-13T20:40:38.697947736Z" level=info msg="shim disconnected" id=55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880 namespace=k8s.io Jan 13 20:40:38.698035 containerd[1494]: time="2025-01-13T20:40:38.698014079Z" level=warning msg="cleaning up after shim disconnected" id=55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880 namespace=k8s.io Jan 13 20:40:38.698035 containerd[1494]: time="2025-01-13T20:40:38.698024159Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:40:38.930270 kubelet[2682]: E0113 20:40:38.930231 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:38.932798 containerd[1494]: time="2025-01-13T20:40:38.932734562Z" level=info msg="CreateContainer within sandbox \"d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:40:38.967154 containerd[1494]: time="2025-01-13T20:40:38.966992675Z" level=info msg="CreateContainer within sandbox \"d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254\"" Jan 13 20:40:38.967577 containerd[1494]: time="2025-01-13T20:40:38.967519377Z" level=info msg="StartContainer for \"da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254\"" Jan 13 20:40:39.039102 systemd[1]: Started cri-containerd-da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254.scope - libcontainer container da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254. Jan 13 20:40:39.085361 containerd[1494]: time="2025-01-13T20:40:39.085288013Z" level=info msg="StartContainer for \"da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254\" returns successfully" Jan 13 20:40:39.272937 kubelet[2682]: I0113 20:40:39.272752 2682 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:40:39.378226 kubelet[2682]: I0113 20:40:39.378149 2682 topology_manager.go:215] "Topology Admit Handler" podUID="0ee16738-1a7c-4fa8-94fe-4fee9591a549" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lnzqn" Jan 13 20:40:39.381517 kubelet[2682]: I0113 20:40:39.381132 2682 topology_manager.go:215] "Topology Admit Handler" podUID="8c1d1815-d87f-4286-8cd5-4af9980c544f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-q48h4" Jan 13 20:40:39.390501 systemd[1]: Created slice kubepods-burstable-pod0ee16738_1a7c_4fa8_94fe_4fee9591a549.slice - libcontainer container kubepods-burstable-pod0ee16738_1a7c_4fa8_94fe_4fee9591a549.slice. Jan 13 20:40:39.401722 systemd[1]: Created slice kubepods-burstable-pod8c1d1815_d87f_4286_8cd5_4af9980c544f.slice - libcontainer container kubepods-burstable-pod8c1d1815_d87f_4286_8cd5_4af9980c544f.slice. 
Jan 13 20:40:39.549548 kubelet[2682]: I0113 20:40:39.549383 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsbff\" (UniqueName: \"kubernetes.io/projected/8c1d1815-d87f-4286-8cd5-4af9980c544f-kube-api-access-gsbff\") pod \"coredns-7db6d8ff4d-q48h4\" (UID: \"8c1d1815-d87f-4286-8cd5-4af9980c544f\") " pod="kube-system/coredns-7db6d8ff4d-q48h4" Jan 13 20:40:39.549548 kubelet[2682]: I0113 20:40:39.549440 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ee16738-1a7c-4fa8-94fe-4fee9591a549-config-volume\") pod \"coredns-7db6d8ff4d-lnzqn\" (UID: \"0ee16738-1a7c-4fa8-94fe-4fee9591a549\") " pod="kube-system/coredns-7db6d8ff4d-lnzqn" Jan 13 20:40:39.549548 kubelet[2682]: I0113 20:40:39.549468 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c1d1815-d87f-4286-8cd5-4af9980c544f-config-volume\") pod \"coredns-7db6d8ff4d-q48h4\" (UID: \"8c1d1815-d87f-4286-8cd5-4af9980c544f\") " pod="kube-system/coredns-7db6d8ff4d-q48h4" Jan 13 20:40:39.549548 kubelet[2682]: I0113 20:40:39.549486 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmv58\" (UniqueName: \"kubernetes.io/projected/0ee16738-1a7c-4fa8-94fe-4fee9591a549-kube-api-access-fmv58\") pod \"coredns-7db6d8ff4d-lnzqn\" (UID: \"0ee16738-1a7c-4fa8-94fe-4fee9591a549\") " pod="kube-system/coredns-7db6d8ff4d-lnzqn" Jan 13 20:40:39.698910 kubelet[2682]: E0113 20:40:39.698854 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:39.699755 containerd[1494]: time="2025-01-13T20:40:39.699663134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lnzqn,Uid:0ee16738-1a7c-4fa8-94fe-4fee9591a549,Namespace:kube-system,Attempt:0,}" Jan 13 20:40:39.705271 kubelet[2682]: E0113 20:40:39.704993 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:39.705617 containerd[1494]: time="2025-01-13T20:40:39.705570131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q48h4,Uid:8c1d1815-d87f-4286-8cd5-4af9980c544f,Namespace:kube-system,Attempt:0,}" Jan 13 20:40:39.936264 kubelet[2682]: E0113 20:40:39.936222 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:39.997262 kubelet[2682]: I0113 20:40:39.997188 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wzw9c" podStartSLOduration=7.826790694 podStartE2EDuration="22.997163999s" podCreationTimestamp="2025-01-13 20:40:17 +0000 UTC" firstStartedPulling="2025-01-13 20:40:18.222259726 +0000 UTC m=+15.471298276" lastFinishedPulling="2025-01-13 20:40:33.392633031 +0000 UTC m=+30.641671581" observedRunningTime="2025-01-13 20:40:39.996313101 +0000 UTC m=+37.245351651" watchObservedRunningTime="2025-01-13 20:40:39.997163999 +0000 UTC m=+37.246202559" Jan 13 20:40:40.937912 kubelet[2682]: E0113 20:40:40.937827 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:41.408461 systemd-networkd[1414]: cilium_host: Link UP Jan 13 20:40:41.408648 systemd-networkd[1414]: cilium_net: Link UP Jan 13 20:40:41.410234 systemd-networkd[1414]: cilium_net: Gained carrier Jan 13 20:40:41.410545 systemd-networkd[1414]: cilium_host: Gained carrier Jan 13 20:40:41.524305 systemd-networkd[1414]: cilium_vxlan: Link UP Jan 13 20:40:41.524315 systemd-networkd[1414]: cilium_vxlan: Gained carrier Jan 13 20:40:41.592052 systemd-networkd[1414]: cilium_host: Gained IPv6LL Jan 13 20:40:41.656075 systemd-networkd[1414]: cilium_net: Gained IPv6LL Jan 13 20:40:41.749946 kernel: NET: Registered PF_ALG protocol family Jan 13 20:40:41.940880 kubelet[2682]: E0113 20:40:41.940069 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:42.222457 systemd[1]: Started sshd@11-10.0.0.97:22-10.0.0.1:38130.service - OpenSSH per-connection server daemon (10.0.0.1:38130). Jan 13 20:40:42.280775 sshd[3770]: Accepted publickey for core from 10.0.0.1 port 38130 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:40:42.282906 sshd-session[3770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:42.287944 systemd-logind[1481]: New session 12 of user core. Jan 13 20:40:42.293089 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:40:42.413191 sshd[3796]: Connection closed by 10.0.0.1 port 38130 Jan 13 20:40:42.413548 sshd-session[3770]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:42.418814 systemd[1]: sshd@11-10.0.0.97:22-10.0.0.1:38130.service: Deactivated successfully. Jan 13 20:40:42.421003 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:40:42.421923 systemd-logind[1481]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:40:42.423001 systemd-logind[1481]: Removed session 12. 
Jan 13 20:40:42.513033 systemd-networkd[1414]: lxc_health: Link UP Jan 13 20:40:42.520215 systemd-networkd[1414]: lxc_health: Gained carrier Jan 13 20:40:42.781013 systemd-networkd[1414]: lxcccabffcc1e66: Link UP Jan 13 20:40:42.794131 kernel: eth0: renamed from tmp5f31a Jan 13 20:40:42.800508 systemd-networkd[1414]: lxcccabffcc1e66: Gained carrier Jan 13 20:40:42.803388 systemd-networkd[1414]: lxc30e8890e9bab: Link UP Jan 13 20:40:42.813894 kernel: eth0: renamed from tmp72426 Jan 13 20:40:42.817857 systemd-networkd[1414]: lxc30e8890e9bab: Gained carrier Jan 13 20:40:43.344039 systemd-networkd[1414]: cilium_vxlan: Gained IPv6LL Jan 13 20:40:44.134632 kubelet[2682]: E0113 20:40:44.134444 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:44.240277 systemd-networkd[1414]: lxcccabffcc1e66: Gained IPv6LL Jan 13 20:40:44.368076 systemd-networkd[1414]: lxc_health: Gained IPv6LL Jan 13 20:40:44.817036 systemd-networkd[1414]: lxc30e8890e9bab: Gained IPv6LL Jan 13 20:40:44.946670 kubelet[2682]: E0113 20:40:44.946631 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:45.948335 kubelet[2682]: E0113 20:40:45.948277 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:46.376238 containerd[1494]: time="2025-01-13T20:40:46.376093557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:40:46.376238 containerd[1494]: time="2025-01-13T20:40:46.376184377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:40:46.376238 containerd[1494]: time="2025-01-13T20:40:46.376204066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:46.376825 containerd[1494]: time="2025-01-13T20:40:46.376528329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:40:46.376825 containerd[1494]: time="2025-01-13T20:40:46.376622315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:40:46.376825 containerd[1494]: time="2025-01-13T20:40:46.376639409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:46.376825 containerd[1494]: time="2025-01-13T20:40:46.376748816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:46.377207 containerd[1494]: time="2025-01-13T20:40:46.377076967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:46.412077 systemd[1]: Started cri-containerd-5f31a09a8aeba5792ebeb585fa5dacbfd7bbe8dd13eef7da90add0ecae926e87.scope - libcontainer container 5f31a09a8aeba5792ebeb585fa5dacbfd7bbe8dd13eef7da90add0ecae926e87. 
Jan 13 20:40:46.415012 systemd[1]: Started cri-containerd-72426ce5a190b00197043b6bdf888d3fba9923ee2888e5b5407077d2a28dcf3f.scope - libcontainer container 72426ce5a190b00197043b6bdf888d3fba9923ee2888e5b5407077d2a28dcf3f. Jan 13 20:40:46.428975 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:40:46.432407 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:40:46.466446 containerd[1494]: time="2025-01-13T20:40:46.466404374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q48h4,Uid:8c1d1815-d87f-4286-8cd5-4af9980c544f,Namespace:kube-system,Attempt:0,} returns sandbox id \"72426ce5a190b00197043b6bdf888d3fba9923ee2888e5b5407077d2a28dcf3f\"" Jan 13 20:40:46.466587 containerd[1494]: time="2025-01-13T20:40:46.466452399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lnzqn,Uid:0ee16738-1a7c-4fa8-94fe-4fee9591a549,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f31a09a8aeba5792ebeb585fa5dacbfd7bbe8dd13eef7da90add0ecae926e87\"" Jan 13 20:40:46.468195 kubelet[2682]: E0113 20:40:46.467224 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:46.468195 kubelet[2682]: E0113 20:40:46.467299 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:46.469572 containerd[1494]: time="2025-01-13T20:40:46.469530753Z" level=info msg="CreateContainer within sandbox \"5f31a09a8aeba5792ebeb585fa5dacbfd7bbe8dd13eef7da90add0ecae926e87\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:40:46.469749 containerd[1494]: time="2025-01-13T20:40:46.469715289Z" level=info msg="CreateContainer within sandbox \"72426ce5a190b00197043b6bdf888d3fba9923ee2888e5b5407077d2a28dcf3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:40:46.493234 containerd[1494]: time="2025-01-13T20:40:46.493163692Z" level=info msg="CreateContainer within sandbox \"5f31a09a8aeba5792ebeb585fa5dacbfd7bbe8dd13eef7da90add0ecae926e87\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c7a0d3704db4d7e84d50c284fe5e1d21c09a32236b941c18ced7f1cef5504afd\"" Jan 13 20:40:46.494002 containerd[1494]: time="2025-01-13T20:40:46.493799202Z" level=info msg="StartContainer for \"c7a0d3704db4d7e84d50c284fe5e1d21c09a32236b941c18ced7f1cef5504afd\"" Jan 13 20:40:46.499094 containerd[1494]: time="2025-01-13T20:40:46.498937039Z" level=info msg="CreateContainer within sandbox \"72426ce5a190b00197043b6bdf888d3fba9923ee2888e5b5407077d2a28dcf3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b1b066ae967924e3c234d327d986369c088c864a8f5c53a459c7a9791a4151c8\"" Jan 13 20:40:46.500995 containerd[1494]: time="2025-01-13T20:40:46.499596366Z" level=info msg="StartContainer for \"b1b066ae967924e3c234d327d986369c088c864a8f5c53a459c7a9791a4151c8\"" Jan 13 20:40:46.533006 systemd[1]: Started cri-containerd-c7a0d3704db4d7e84d50c284fe5e1d21c09a32236b941c18ced7f1cef5504afd.scope - libcontainer container c7a0d3704db4d7e84d50c284fe5e1d21c09a32236b941c18ced7f1cef5504afd. 
Jan 13 20:40:46.537748 systemd[1]: Started cri-containerd-b1b066ae967924e3c234d327d986369c088c864a8f5c53a459c7a9791a4151c8.scope - libcontainer container b1b066ae967924e3c234d327d986369c088c864a8f5c53a459c7a9791a4151c8. Jan 13 20:40:46.568312 containerd[1494]: time="2025-01-13T20:40:46.568250363Z" level=info msg="StartContainer for \"c7a0d3704db4d7e84d50c284fe5e1d21c09a32236b941c18ced7f1cef5504afd\" returns successfully" Jan 13 20:40:46.579525 containerd[1494]: time="2025-01-13T20:40:46.579437204Z" level=info msg="StartContainer for \"b1b066ae967924e3c234d327d986369c088c864a8f5c53a459c7a9791a4151c8\" returns successfully" Jan 13 20:40:46.951004 kubelet[2682]: E0113 20:40:46.950918 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:46.954282 kubelet[2682]: E0113 20:40:46.954254 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:46.963045 kubelet[2682]: I0113 20:40:46.962962 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lnzqn" podStartSLOduration=29.962937906 podStartE2EDuration="29.962937906s" podCreationTimestamp="2025-01-13 20:40:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:40:46.962258328 +0000 UTC m=+44.211296878" watchObservedRunningTime="2025-01-13 20:40:46.962937906 +0000 UTC m=+44.211976456" Jan 13 20:40:47.169683 kubelet[2682]: I0113 20:40:47.169219 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-q48h4" podStartSLOduration=30.169195386 podStartE2EDuration="30.169195386s" podCreationTimestamp="2025-01-13 20:40:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:40:47.167690054 +0000 UTC m=+44.416728604" watchObservedRunningTime="2025-01-13 20:40:47.169195386 +0000 UTC m=+44.418233936" Jan 13 20:40:47.430797 systemd[1]: Started sshd@12-10.0.0.97:22-10.0.0.1:33788.service - OpenSSH per-connection server daemon (10.0.0.1:33788). Jan 13 20:40:47.487586 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 33788 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:40:47.489912 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:47.494804 systemd-logind[1481]: New session 13 of user core. Jan 13 20:40:47.501224 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:40:47.633679 sshd[4132]: Connection closed by 10.0.0.1 port 33788 Jan 13 20:40:47.634155 sshd-session[4130]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:47.638965 systemd[1]: sshd@12-10.0.0.97:22-10.0.0.1:33788.service: Deactivated successfully. Jan 13 20:40:47.641668 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:40:47.642716 systemd-logind[1481]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:40:47.643988 systemd-logind[1481]: Removed session 13. 
Jan 13 20:40:47.955396 kubelet[2682]: E0113 20:40:47.955351 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:47.956011 kubelet[2682]: E0113 20:40:47.955635 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:48.957562 kubelet[2682]: E0113 20:40:48.957520 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:52.653287 systemd[1]: Started sshd@13-10.0.0.97:22-10.0.0.1:33792.service - OpenSSH per-connection server daemon (10.0.0.1:33792). Jan 13 20:40:52.695119 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 33792 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:40:52.696657 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:52.700871 systemd-logind[1481]: New session 14 of user core. Jan 13 20:40:52.711969 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:40:52.845603 sshd[4152]: Connection closed by 10.0.0.1 port 33792 Jan 13 20:40:52.846003 sshd-session[4150]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:52.862171 systemd[1]: sshd@13-10.0.0.97:22-10.0.0.1:33792.service: Deactivated successfully. Jan 13 20:40:52.865563 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:40:52.869068 systemd-logind[1481]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:40:52.878366 systemd[1]: Started sshd@14-10.0.0.97:22-10.0.0.1:33808.service - OpenSSH per-connection server daemon (10.0.0.1:33808). Jan 13 20:40:52.879755 systemd-logind[1481]: Removed session 14. Jan 13 20:40:52.925663 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 33808 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:40:52.928143 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:52.932806 systemd-logind[1481]: New session 15 of user core. Jan 13 20:40:52.938982 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:40:53.088243 sshd[4167]: Connection closed by 10.0.0.1 port 33808 Jan 13 20:40:53.088680 sshd-session[4165]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:53.103532 systemd[1]: sshd@14-10.0.0.97:22-10.0.0.1:33808.service: Deactivated successfully. Jan 13 20:40:53.106098 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:40:53.109446 systemd-logind[1481]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:40:53.117114 systemd[1]: Started sshd@15-10.0.0.97:22-10.0.0.1:33822.service - OpenSSH per-connection server daemon (10.0.0.1:33822). Jan 13 20:40:53.118150 systemd-logind[1481]: Removed session 15. Jan 13 20:40:53.156457 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 33822 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:40:53.158213 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:53.162785 systemd-logind[1481]: New session 16 of user core. Jan 13 20:40:53.171014 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 13 20:40:53.285249 sshd[4179]: Connection closed by 10.0.0.1 port 33822 Jan 13 20:40:53.285559 sshd-session[4177]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:53.289542 systemd[1]: sshd@15-10.0.0.97:22-10.0.0.1:33822.service: Deactivated successfully. Jan 13 20:40:53.291674 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:40:53.292337 systemd-logind[1481]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:40:53.293187 systemd-logind[1481]: Removed session 16. Jan 13 20:40:58.322545 systemd[1]: Started sshd@16-10.0.0.97:22-10.0.0.1:58480.service - OpenSSH per-connection server daemon (10.0.0.1:58480). Jan 13 20:40:58.382117 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 58480 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:40:58.384444 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:58.392689 systemd-logind[1481]: New session 17 of user core. Jan 13 20:40:58.410250 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:40:58.605578 sshd[4194]: Connection closed by 10.0.0.1 port 58480 Jan 13 20:40:58.604590 sshd-session[4192]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:58.611413 systemd[1]: sshd@16-10.0.0.97:22-10.0.0.1:58480.service: Deactivated successfully. Jan 13 20:40:58.618458 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:40:58.621172 systemd-logind[1481]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:40:58.623875 systemd-logind[1481]: Removed session 17. Jan 13 20:41:03.631837 systemd[1]: Started sshd@17-10.0.0.97:22-10.0.0.1:58490.service - OpenSSH per-connection server daemon (10.0.0.1:58490). Jan 13 20:41:03.702793 sshd[4209]: Accepted publickey for core from 10.0.0.1 port 58490 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:41:03.705152 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:03.717063 systemd-logind[1481]: New session 18 of user core. Jan 13 20:41:03.731316 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:41:03.923310 sshd[4211]: Connection closed by 10.0.0.1 port 58490 Jan 13 20:41:03.924114 sshd-session[4209]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:03.938134 systemd[1]: sshd@17-10.0.0.97:22-10.0.0.1:58490.service: Deactivated successfully. Jan 13 20:41:03.949198 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:41:03.950608 systemd-logind[1481]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:41:03.952039 systemd-logind[1481]: Removed session 18. Jan 13 20:41:08.945253 systemd[1]: Started sshd@18-10.0.0.97:22-10.0.0.1:45418.service - OpenSSH per-connection server daemon (10.0.0.1:45418). Jan 13 20:41:08.982034 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 45418 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:41:08.983576 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:08.987421 systemd-logind[1481]: New session 19 of user core. Jan 13 20:41:08.998025 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 13 20:41:09.103053 sshd[4225]: Connection closed by 10.0.0.1 port 45418 Jan 13 20:41:09.103471 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:09.119580 systemd[1]: sshd@18-10.0.0.97:22-10.0.0.1:45418.service: Deactivated successfully. Jan 13 20:41:09.121413 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:41:09.122818 systemd-logind[1481]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:41:09.129175 systemd[1]: Started sshd@19-10.0.0.97:22-10.0.0.1:45424.service - OpenSSH per-connection server daemon (10.0.0.1:45424). Jan 13 20:41:09.130149 systemd-logind[1481]: Removed session 19. Jan 13 20:41:09.165580 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 45424 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:41:09.167040 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:09.170794 systemd-logind[1481]: New session 20 of user core. Jan 13 20:41:09.180966 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:41:09.598307 sshd[4240]: Connection closed by 10.0.0.1 port 45424 Jan 13 20:41:09.598909 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:09.608792 systemd[1]: sshd@19-10.0.0.97:22-10.0.0.1:45424.service: Deactivated successfully. Jan 13 20:41:09.610823 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:41:09.612458 systemd-logind[1481]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:41:09.613779 systemd[1]: Started sshd@20-10.0.0.97:22-10.0.0.1:45426.service - OpenSSH per-connection server daemon (10.0.0.1:45426). Jan 13 20:41:09.614673 systemd-logind[1481]: Removed session 20. Jan 13 20:41:09.660528 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 45426 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:41:09.662221 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:09.666394 systemd-logind[1481]: New session 21 of user core. Jan 13 20:41:09.673966 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:41:11.188730 sshd[4252]: Connection closed by 10.0.0.1 port 45426 Jan 13 20:41:11.191340 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:11.200873 systemd[1]: sshd@20-10.0.0.97:22-10.0.0.1:45426.service: Deactivated successfully. Jan 13 20:41:11.207589 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:41:11.211913 systemd-logind[1481]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:41:11.227282 systemd[1]: Started sshd@21-10.0.0.97:22-10.0.0.1:45432.service - OpenSSH per-connection server daemon (10.0.0.1:45432). Jan 13 20:41:11.228761 systemd-logind[1481]: Removed session 21. Jan 13 20:41:11.264283 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 45432 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:41:11.266109 sshd-session[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:11.270940 systemd-logind[1481]: New session 22 of user core. Jan 13 20:41:11.281124 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 13 20:41:11.674014 sshd[4272]: Connection closed by 10.0.0.1 port 45432 Jan 13 20:41:11.674570 sshd-session[4270]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:11.685883 systemd[1]: sshd@21-10.0.0.97:22-10.0.0.1:45432.service: Deactivated successfully. Jan 13 20:41:11.687880 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:41:11.689373 systemd-logind[1481]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:41:11.696195 systemd[1]: Started sshd@22-10.0.0.97:22-10.0.0.1:45438.service - OpenSSH per-connection server daemon (10.0.0.1:45438). Jan 13 20:41:11.697120 systemd-logind[1481]: Removed session 22. Jan 13 20:41:11.735059 sshd[4283]: Accepted publickey for core from 10.0.0.1 port 45438 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:41:11.736943 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:11.741873 systemd-logind[1481]: New session 23 of user core. Jan 13 20:41:11.751032 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:41:11.878473 sshd[4285]: Connection closed by 10.0.0.1 port 45438 Jan 13 20:41:11.878871 sshd-session[4283]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:11.883512 systemd[1]: sshd@22-10.0.0.97:22-10.0.0.1:45438.service: Deactivated successfully. Jan 13 20:41:11.885723 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:41:11.886573 systemd-logind[1481]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:41:11.887595 systemd-logind[1481]: Removed session 23. Jan 13 20:41:16.889753 systemd[1]: Started sshd@23-10.0.0.97:22-10.0.0.1:42024.service - OpenSSH per-connection server daemon (10.0.0.1:42024). Jan 13 20:41:16.938666 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 42024 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:41:16.940804 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:16.945467 systemd-logind[1481]: New session 24 of user core. Jan 13 20:41:16.954002 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 20:41:17.064153 sshd[4300]: Connection closed by 10.0.0.1 port 42024 Jan 13 20:41:17.064529 sshd-session[4298]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:17.069049 systemd[1]: sshd@23-10.0.0.97:22-10.0.0.1:42024.service: Deactivated successfully. Jan 13 20:41:17.071300 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 20:41:17.071935 systemd-logind[1481]: Session 24 logged out. Waiting for processes to exit. Jan 13 20:41:17.072887 systemd-logind[1481]: Removed session 24. Jan 13 20:41:17.834190 kubelet[2682]: E0113 20:41:17.834130 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:41:22.076101 systemd[1]: Started sshd@24-10.0.0.97:22-10.0.0.1:42032.service - OpenSSH per-connection server daemon (10.0.0.1:42032). Jan 13 20:41:22.118485 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 42032 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:41:22.119892 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:22.124107 systemd-logind[1481]: New session 25 of user core. Jan 13 20:41:22.130000 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 13 20:41:22.235186 sshd[4319]: Connection closed by 10.0.0.1 port 42032 Jan 13 20:41:22.235537 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:22.239251 systemd[1]: sshd@24-10.0.0.97:22-10.0.0.1:42032.service: Deactivated successfully. Jan 13 20:41:22.241170 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 20:41:22.241758 systemd-logind[1481]: Session 25 logged out. Waiting for processes to exit. Jan 13 20:41:22.242634 systemd-logind[1481]: Removed session 25. Jan 13 20:41:27.258354 systemd[1]: Started sshd@25-10.0.0.97:22-10.0.0.1:38450.service - OpenSSH per-connection server daemon (10.0.0.1:38450). Jan 13 20:41:27.296943 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 38450 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:41:27.298635 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:27.302717 systemd-logind[1481]: New session 26 of user core. Jan 13 20:41:27.308010 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 20:41:27.417702 sshd[4333]: Connection closed by 10.0.0.1 port 38450 Jan 13 20:41:27.418114 sshd-session[4331]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:27.422785 systemd[1]: sshd@25-10.0.0.97:22-10.0.0.1:38450.service: Deactivated successfully. Jan 13 20:41:27.424999 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 20:41:27.425609 systemd-logind[1481]: Session 26 logged out. Waiting for processes to exit. Jan 13 20:41:27.426461 systemd-logind[1481]: Removed session 26. Jan 13 20:41:32.431241 systemd[1]: Started sshd@26-10.0.0.97:22-10.0.0.1:38462.service - OpenSSH per-connection server daemon (10.0.0.1:38462). Jan 13 20:41:32.491871 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 38462 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:41:32.493649 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:32.499517 systemd-logind[1481]: New session 27 of user core. Jan 13 20:41:32.508093 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 20:41:32.628295 sshd[4347]: Connection closed by 10.0.0.1 port 38462 Jan 13 20:41:32.628708 sshd-session[4345]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:32.642693 systemd[1]: sshd@26-10.0.0.97:22-10.0.0.1:38462.service: Deactivated successfully. Jan 13 20:41:32.645457 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 20:41:32.647368 systemd-logind[1481]: Session 27 logged out. Waiting for processes to exit. Jan 13 20:41:32.653293 systemd[1]: Started sshd@27-10.0.0.97:22-10.0.0.1:38464.service - OpenSSH per-connection server daemon (10.0.0.1:38464). Jan 13 20:41:32.654530 systemd-logind[1481]: Removed session 27. Jan 13 20:41:32.692412 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 38464 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:41:32.694124 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:32.698684 systemd-logind[1481]: New session 28 of user core. Jan 13 20:41:32.708004 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 13 20:41:34.098525 containerd[1494]: time="2025-01-13T20:41:34.098468596Z" level=info msg="StopContainer for \"055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276\" with timeout 30 (s)" Jan 13 20:41:34.107111 containerd[1494]: time="2025-01-13T20:41:34.107076496Z" level=info msg="Stop container \"055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276\" with signal terminated" Jan 13 20:41:34.122546 systemd[1]: cri-containerd-055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276.scope: Deactivated successfully. Jan 13 20:41:34.142033 containerd[1494]: time="2025-01-13T20:41:34.141933271Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:41:34.146979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276-rootfs.mount: Deactivated successfully. Jan 13 20:41:34.158041 containerd[1494]: time="2025-01-13T20:41:34.157884425Z" level=info msg="StopContainer for \"da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254\" with timeout 2 (s)" Jan 13 20:41:34.158305 containerd[1494]: time="2025-01-13T20:41:34.158054970Z" level=info msg="shim disconnected" id=055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276 namespace=k8s.io Jan 13 20:41:34.158305 containerd[1494]: time="2025-01-13T20:41:34.158113682Z" level=warning msg="cleaning up after shim disconnected" id=055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276 namespace=k8s.io Jan 13 20:41:34.158305 containerd[1494]: time="2025-01-13T20:41:34.158123761Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:41:34.158305 containerd[1494]: time="2025-01-13T20:41:34.158177523Z" level=info msg="Stop container \"da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254\" with signal terminated" Jan 13 20:41:34.164974 systemd-networkd[1414]: lxc_health: Link DOWN Jan 13 20:41:34.165675 systemd-networkd[1414]: lxc_health: Lost carrier Jan 13 20:41:34.184995 containerd[1494]: time="2025-01-13T20:41:34.184928724Z" level=info msg="StopContainer for \"055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276\" returns successfully" Jan 13 20:41:34.189621 containerd[1494]: time="2025-01-13T20:41:34.189578207Z" level=info msg="StopPodSandbox for \"8841705b34dbe218024df0ac722ca99ff575108984f9dc0821de220b397d2aa9\"" Jan 13 20:41:34.193946 systemd[1]: cri-containerd-da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254.scope: Deactivated successfully. Jan 13 20:41:34.194352 systemd[1]: cri-containerd-da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254.scope: Consumed 7.479s CPU time. Jan 13 20:41:34.197997 containerd[1494]: time="2025-01-13T20:41:34.189624174Z" level=info msg="Container to stop \"055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:41:34.198540 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8841705b34dbe218024df0ac722ca99ff575108984f9dc0821de220b397d2aa9-shm.mount: Deactivated successfully. Jan 13 20:41:34.210452 systemd[1]: cri-containerd-8841705b34dbe218024df0ac722ca99ff575108984f9dc0821de220b397d2aa9.scope: Deactivated successfully. 
Jan 13 20:41:34.221098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254-rootfs.mount: Deactivated successfully. Jan 13 20:41:34.228976 containerd[1494]: time="2025-01-13T20:41:34.228909372Z" level=info msg="shim disconnected" id=da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254 namespace=k8s.io Jan 13 20:41:34.228976 containerd[1494]: time="2025-01-13T20:41:34.228966320Z" level=warning msg="cleaning up after shim disconnected" id=da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254 namespace=k8s.io Jan 13 20:41:34.228976 containerd[1494]: time="2025-01-13T20:41:34.228975408Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:41:34.240233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8841705b34dbe218024df0ac722ca99ff575108984f9dc0821de220b397d2aa9-rootfs.mount: Deactivated successfully. Jan 13 20:41:34.246422 containerd[1494]: time="2025-01-13T20:41:34.246201169Z" level=info msg="shim disconnected" id=8841705b34dbe218024df0ac722ca99ff575108984f9dc0821de220b397d2aa9 namespace=k8s.io Jan 13 20:41:34.246422 containerd[1494]: time="2025-01-13T20:41:34.246263447Z" level=warning msg="cleaning up after shim disconnected" id=8841705b34dbe218024df0ac722ca99ff575108984f9dc0821de220b397d2aa9 namespace=k8s.io Jan 13 20:41:34.246422 containerd[1494]: time="2025-01-13T20:41:34.246274198Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:41:34.250073 containerd[1494]: time="2025-01-13T20:41:34.250003149Z" level=info msg="StopContainer for \"da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254\" returns successfully" Jan 13 20:41:34.250646 containerd[1494]: time="2025-01-13T20:41:34.250620794Z" level=info msg="StopPodSandbox for \"d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03\"" Jan 13 20:41:34.250802 containerd[1494]: time="2025-01-13T20:41:34.250685578Z" level=info msg="Container to stop \"0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:41:34.250802 containerd[1494]: time="2025-01-13T20:41:34.250799826Z" level=info msg="Container to stop \"55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:41:34.250936 containerd[1494]: time="2025-01-13T20:41:34.250810786Z" level=info msg="Container to stop \"da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:41:34.250936 containerd[1494]: time="2025-01-13T20:41:34.250820895Z" level=info msg="Container to stop \"643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:41:34.250936 containerd[1494]: time="2025-01-13T20:41:34.250830674Z" level=info msg="Container to stop \"98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:41:34.258338 systemd[1]: cri-containerd-d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03.scope: Deactivated successfully. 
Jan 13 20:41:34.279746 containerd[1494]: time="2025-01-13T20:41:34.279682023Z" level=info msg="TearDown network for sandbox \"8841705b34dbe218024df0ac722ca99ff575108984f9dc0821de220b397d2aa9\" successfully" Jan 13 20:41:34.279746 containerd[1494]: time="2025-01-13T20:41:34.279735565Z" level=info msg="StopPodSandbox for \"8841705b34dbe218024df0ac722ca99ff575108984f9dc0821de220b397d2aa9\" returns successfully" Jan 13 20:41:34.295127 containerd[1494]: time="2025-01-13T20:41:34.295041350Z" level=info msg="shim disconnected" id=d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03 namespace=k8s.io Jan 13 20:41:34.295127 containerd[1494]: time="2025-01-13T20:41:34.295118718Z" level=warning msg="cleaning up after shim disconnected" id=d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03 namespace=k8s.io Jan 13 20:41:34.295127 containerd[1494]: time="2025-01-13T20:41:34.295131111Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:41:34.315922 containerd[1494]: time="2025-01-13T20:41:34.315815830Z" level=info msg="TearDown network for sandbox \"d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03\" successfully" Jan 13 20:41:34.315922 containerd[1494]: time="2025-01-13T20:41:34.315905119Z" level=info msg="StopPodSandbox for \"d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03\" returns successfully" Jan 13 20:41:34.431918 kubelet[2682]: I0113 20:41:34.431701 2682 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99jsc\" (UniqueName: \"kubernetes.io/projected/4a908f06-106f-4b55-bc43-021bd5c26297-kube-api-access-99jsc\") pod \"4a908f06-106f-4b55-bc43-021bd5c26297\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " Jan 13 20:41:34.431918 kubelet[2682]: I0113 20:41:34.431764 2682 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-xtables-lock\") pod \"4a908f06-106f-4b55-bc43-021bd5c26297\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " Jan 13 20:41:34.431918 kubelet[2682]: I0113 20:41:34.431782 2682 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-etc-cni-netd\") pod \"4a908f06-106f-4b55-bc43-021bd5c26297\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " Jan 13 20:41:34.431918 kubelet[2682]: I0113 20:41:34.431798 2682 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4a908f06-106f-4b55-bc43-021bd5c26297-hubble-tls\") pod \"4a908f06-106f-4b55-bc43-021bd5c26297\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " Jan 13 20:41:34.431918 kubelet[2682]: I0113 20:41:34.431816 2682 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-cilium-cgroup\") pod \"4a908f06-106f-4b55-bc43-021bd5c26297\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " Jan 13 20:41:34.431918 kubelet[2682]: I0113 20:41:34.431837 2682 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a908f06-106f-4b55-bc43-021bd5c26297-cilium-config-path\") pod \"4a908f06-106f-4b55-bc43-021bd5c26297\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " Jan 13 20:41:34.434662 kubelet[2682]: I0113 20:41:34.431874 2682 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-bpf-maps\") pod \"4a908f06-106f-4b55-bc43-021bd5c26297\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " Jan 13 20:41:34.434662 kubelet[2682]: I0113 20:41:34.431887 2682 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-lib-modules\") pod \"4a908f06-106f-4b55-bc43-021bd5c26297\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " Jan 13 20:41:34.434662 kubelet[2682]: I0113 20:41:34.431907 2682 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-host-proc-sys-net\") pod \"4a908f06-106f-4b55-bc43-021bd5c26297\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " Jan 13 20:41:34.434662 kubelet[2682]: I0113 20:41:34.431928 2682 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/353a00b4-f807-4712-80d0-a3f47f2ea44d-cilium-config-path\") pod \"353a00b4-f807-4712-80d0-a3f47f2ea44d\" (UID: \"353a00b4-f807-4712-80d0-a3f47f2ea44d\") " Jan 13 20:41:34.434662 kubelet[2682]: I0113 20:41:34.431943 2682 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhpz7\" (UniqueName: \"kubernetes.io/projected/353a00b4-f807-4712-80d0-a3f47f2ea44d-kube-api-access-mhpz7\") pod \"353a00b4-f807-4712-80d0-a3f47f2ea44d\" (UID: \"353a00b4-f807-4712-80d0-a3f47f2ea44d\") " Jan 13 20:41:34.434662 kubelet[2682]: I0113 20:41:34.431959 2682 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-cni-path\") pod \"4a908f06-106f-4b55-bc43-021bd5c26297\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " Jan 13 20:41:34.434894 kubelet[2682]: I0113 20:41:34.431959 2682 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4a908f06-106f-4b55-bc43-021bd5c26297" (UID: "4a908f06-106f-4b55-bc43-021bd5c26297"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:34.434894 kubelet[2682]: I0113 20:41:34.431997 2682 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4a908f06-106f-4b55-bc43-021bd5c26297" (UID: "4a908f06-106f-4b55-bc43-021bd5c26297"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:34.434894 kubelet[2682]: I0113 20:41:34.431974 2682 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-host-proc-sys-kernel\") pod \"4a908f06-106f-4b55-bc43-021bd5c26297\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " Jan 13 20:41:34.434894 kubelet[2682]: I0113 20:41:34.432049 2682 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4a908f06-106f-4b55-bc43-021bd5c26297-clustermesh-secrets\") pod \"4a908f06-106f-4b55-bc43-021bd5c26297\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " Jan 13 20:41:34.434894 kubelet[2682]: I0113 20:41:34.432075 2682 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-cilium-run\") pod \"4a908f06-106f-4b55-bc43-021bd5c26297\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " Jan 13 20:41:34.435070 kubelet[2682]: I0113 20:41:34.432096 2682 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-hostproc\") pod \"4a908f06-106f-4b55-bc43-021bd5c26297\" (UID: \"4a908f06-106f-4b55-bc43-021bd5c26297\") " Jan 13 20:41:34.435070 kubelet[2682]: I0113 20:41:34.432136 2682 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 13 20:41:34.435070 kubelet[2682]: I0113 20:41:34.432149 2682 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 13 20:41:34.435070 kubelet[2682]: I0113 20:41:34.432018 2682 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4a908f06-106f-4b55-bc43-021bd5c26297" (UID: "4a908f06-106f-4b55-bc43-021bd5c26297"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:34.435070 kubelet[2682]: I0113 20:41:34.432177 2682 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-hostproc" (OuterVolumeSpecName: "hostproc") pod "4a908f06-106f-4b55-bc43-021bd5c26297" (UID: "4a908f06-106f-4b55-bc43-021bd5c26297"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:34.436490 kubelet[2682]: I0113 20:41:34.436211 2682 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a908f06-106f-4b55-bc43-021bd5c26297-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4a908f06-106f-4b55-bc43-021bd5c26297" (UID: "4a908f06-106f-4b55-bc43-021bd5c26297"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:41:34.436490 kubelet[2682]: I0113 20:41:34.436225 2682 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a908f06-106f-4b55-bc43-021bd5c26297-kube-api-access-99jsc" (OuterVolumeSpecName: "kube-api-access-99jsc") pod "4a908f06-106f-4b55-bc43-021bd5c26297" (UID: "4a908f06-106f-4b55-bc43-021bd5c26297"). InnerVolumeSpecName "kube-api-access-99jsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:41:34.436490 kubelet[2682]: I0113 20:41:34.436282 2682 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4a908f06-106f-4b55-bc43-021bd5c26297" (UID: "4a908f06-106f-4b55-bc43-021bd5c26297"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:34.436490 kubelet[2682]: I0113 20:41:34.436293 2682 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4a908f06-106f-4b55-bc43-021bd5c26297" (UID: "4a908f06-106f-4b55-bc43-021bd5c26297"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:34.436490 kubelet[2682]: I0113 20:41:34.436307 2682 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4a908f06-106f-4b55-bc43-021bd5c26297" (UID: "4a908f06-106f-4b55-bc43-021bd5c26297"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:34.436689 kubelet[2682]: I0113 20:41:34.436321 2682 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4a908f06-106f-4b55-bc43-021bd5c26297" (UID: "4a908f06-106f-4b55-bc43-021bd5c26297"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:34.436689 kubelet[2682]: I0113 20:41:34.436345 2682 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4a908f06-106f-4b55-bc43-021bd5c26297" (UID: "4a908f06-106f-4b55-bc43-021bd5c26297"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:34.436689 kubelet[2682]: I0113 20:41:34.436579 2682 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-cni-path" (OuterVolumeSpecName: "cni-path") pod "4a908f06-106f-4b55-bc43-021bd5c26297" (UID: "4a908f06-106f-4b55-bc43-021bd5c26297"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:34.436689 kubelet[2682]: I0113 20:41:34.436678 2682 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a908f06-106f-4b55-bc43-021bd5c26297-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4a908f06-106f-4b55-bc43-021bd5c26297" (UID: "4a908f06-106f-4b55-bc43-021bd5c26297"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:41:34.438432 kubelet[2682]: I0113 20:41:34.438391 2682 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a908f06-106f-4b55-bc43-021bd5c26297-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4a908f06-106f-4b55-bc43-021bd5c26297" (UID: "4a908f06-106f-4b55-bc43-021bd5c26297"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:41:34.439808 kubelet[2682]: I0113 20:41:34.439784 2682 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/353a00b4-f807-4712-80d0-a3f47f2ea44d-kube-api-access-mhpz7" (OuterVolumeSpecName: "kube-api-access-mhpz7") pod "353a00b4-f807-4712-80d0-a3f47f2ea44d" (UID: "353a00b4-f807-4712-80d0-a3f47f2ea44d"). InnerVolumeSpecName "kube-api-access-mhpz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:41:34.440933 kubelet[2682]: I0113 20:41:34.440891 2682 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/353a00b4-f807-4712-80d0-a3f47f2ea44d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "353a00b4-f807-4712-80d0-a3f47f2ea44d" (UID: "353a00b4-f807-4712-80d0-a3f47f2ea44d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:41:34.533039 kubelet[2682]: I0113 20:41:34.532986 2682 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:41:34.533039 kubelet[2682]: I0113 20:41:34.533025 2682 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4a908f06-106f-4b55-bc43-021bd5c26297-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 13 20:41:34.533039 kubelet[2682]: I0113 20:41:34.533039 2682 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 13 20:41:34.533039 kubelet[2682]: I0113 20:41:34.533051 2682 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 13 20:41:34.533296 kubelet[2682]: I0113 20:41:34.533061 2682 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-99jsc\" (UniqueName: \"kubernetes.io/projected/4a908f06-106f-4b55-bc43-021bd5c26297-kube-api-access-99jsc\") on node \"localhost\" DevicePath \"\"" Jan 13 20:41:34.533296 kubelet[2682]: I0113 20:41:34.533072 2682 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 13 20:41:34.533296 kubelet[2682]: I0113 20:41:34.533083 2682 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a908f06-106f-4b55-bc43-021bd5c26297-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:41:34.533296 kubelet[2682]: I0113 20:41:34.533093 2682 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4a908f06-106f-4b55-bc43-021bd5c26297-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 13 
20:41:34.533296 kubelet[2682]: I0113 20:41:34.533103 2682 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 13 20:41:34.533296 kubelet[2682]: I0113 20:41:34.533113 2682 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 13 20:41:34.533296 kubelet[2682]: I0113 20:41:34.533123 2682 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mhpz7\" (UniqueName: \"kubernetes.io/projected/353a00b4-f807-4712-80d0-a3f47f2ea44d-kube-api-access-mhpz7\") on node \"localhost\" DevicePath \"\"" Jan 13 20:41:34.533296 kubelet[2682]: I0113 20:41:34.533132 2682 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 13 20:41:34.533477 kubelet[2682]: I0113 20:41:34.533142 2682 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4a908f06-106f-4b55-bc43-021bd5c26297-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 13 20:41:34.533477 kubelet[2682]: I0113 20:41:34.533151 2682 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/353a00b4-f807-4712-80d0-a3f47f2ea44d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:41:34.835164 kubelet[2682]: E0113 20:41:34.834962 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:41:34.843583 systemd[1]: Removed slice kubepods-besteffort-pod353a00b4_f807_4712_80d0_a3f47f2ea44d.slice - libcontainer container kubepods-besteffort-pod353a00b4_f807_4712_80d0_a3f47f2ea44d.slice. Jan 13 20:41:34.845054 systemd[1]: Removed slice kubepods-burstable-pod4a908f06_106f_4b55_bc43_021bd5c26297.slice - libcontainer container kubepods-burstable-pod4a908f06_106f_4b55_bc43_021bd5c26297.slice. Jan 13 20:41:34.845176 systemd[1]: kubepods-burstable-pod4a908f06_106f_4b55_bc43_021bd5c26297.slice: Consumed 7.608s CPU time. Jan 13 20:41:35.112502 systemd[1]: var-lib-kubelet-pods-353a00b4\x2df807\x2d4712\x2d80d0\x2da3f47f2ea44d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmhpz7.mount: Deactivated successfully. Jan 13 20:41:35.112649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03-rootfs.mount: Deactivated successfully. Jan 13 20:41:35.112777 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d78d8110b9e31dd60741128c397e407870c2efd07ef0d810f26ded70b1514e03-shm.mount: Deactivated successfully. Jan 13 20:41:35.112905 systemd[1]: var-lib-kubelet-pods-4a908f06\x2d106f\x2d4b55\x2dbc43\x2d021bd5c26297-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d99jsc.mount: Deactivated successfully. Jan 13 20:41:35.113038 systemd[1]: var-lib-kubelet-pods-4a908f06\x2d106f\x2d4b55\x2dbc43\x2d021bd5c26297-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 13 20:41:35.113148 systemd[1]: var-lib-kubelet-pods-4a908f06\x2d106f\x2d4b55\x2dbc43\x2d021bd5c26297-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:41:35.116453 kubelet[2682]: I0113 20:41:35.116071 2682 scope.go:117] "RemoveContainer" containerID="055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276" Jan 13 20:41:35.125095 containerd[1494]: time="2025-01-13T20:41:35.123374000Z" level=info msg="RemoveContainer for \"055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276\"" Jan 13 20:41:35.264875 containerd[1494]: time="2025-01-13T20:41:35.264781908Z" level=info msg="RemoveContainer for \"055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276\" returns successfully" Jan 13 20:41:35.265263 kubelet[2682]: I0113 20:41:35.265219 2682 scope.go:117] "RemoveContainer" containerID="055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276" Jan 13 20:41:35.265633 containerd[1494]: time="2025-01-13T20:41:35.265495086Z" level=error msg="ContainerStatus for \"055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276\": not found" Jan 13 20:41:35.272598 kubelet[2682]: E0113 20:41:35.272552 2682 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276\": not found" containerID="055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276" Jan 13 20:41:35.272688 kubelet[2682]: I0113 20:41:35.272597 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276"} err="failed to get container status \"055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276\": rpc error: code = NotFound desc = an error occurred when try to find container \"055fdc82c183a19b6018de89a4ec8dfdd15fd447a5b8349cde592c12da774276\": not found" Jan 13 20:41:35.272688 kubelet[2682]: I0113 20:41:35.272685 2682 scope.go:117] "RemoveContainer" containerID="da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254" Jan 13 20:41:35.273955 containerd[1494]: time="2025-01-13T20:41:35.273916002Z" level=info msg="RemoveContainer for \"da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254\"" Jan 13 20:41:35.346768 containerd[1494]: time="2025-01-13T20:41:35.346689108Z" level=info msg="RemoveContainer for \"da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254\" returns successfully" Jan 13 20:41:35.347048 kubelet[2682]: I0113 20:41:35.347016 2682 scope.go:117] "RemoveContainer" containerID="55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880" Jan 13 20:41:35.348069 containerd[1494]: time="2025-01-13T20:41:35.348043649Z" level=info msg="RemoveContainer for \"55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880\"" Jan 13 20:41:35.456682 containerd[1494]: time="2025-01-13T20:41:35.456508364Z" level=info msg="RemoveContainer for \"55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880\" returns successfully" Jan 13 20:41:35.456853 kubelet[2682]: I0113 20:41:35.456813 2682 scope.go:117] "RemoveContainer" containerID="0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e" Jan 13 20:41:35.458195 containerd[1494]: 
time="2025-01-13T20:41:35.458162947Z" level=info msg="RemoveContainer for \"0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e\"" Jan 13 20:41:35.555425 containerd[1494]: time="2025-01-13T20:41:35.555364430Z" level=info msg="RemoveContainer for \"0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e\" returns successfully" Jan 13 20:41:35.555754 kubelet[2682]: I0113 20:41:35.555699 2682 scope.go:117] "RemoveContainer" containerID="98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798" Jan 13 20:41:35.557188 containerd[1494]: time="2025-01-13T20:41:35.557153038Z" level=info msg="RemoveContainer for \"98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798\"" Jan 13 20:41:35.643868 containerd[1494]: time="2025-01-13T20:41:35.643776275Z" level=info msg="RemoveContainer for \"98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798\" returns successfully" Jan 13 20:41:35.644224 kubelet[2682]: I0113 20:41:35.644175 2682 scope.go:117] "RemoveContainer" containerID="643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf" Jan 13 20:41:35.645905 containerd[1494]: time="2025-01-13T20:41:35.645837122Z" level=info msg="RemoveContainer for \"643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf\"" Jan 13 20:41:35.719178 containerd[1494]: time="2025-01-13T20:41:35.718986575Z" level=info msg="RemoveContainer for \"643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf\" returns successfully" Jan 13 20:41:35.719395 kubelet[2682]: I0113 20:41:35.719355 2682 scope.go:117] "RemoveContainer" containerID="da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254" Jan 13 20:41:35.719689 containerd[1494]: time="2025-01-13T20:41:35.719654228Z" level=error msg="ContainerStatus for \"da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254\": not found" Jan 13 20:41:35.719952 kubelet[2682]: E0113 20:41:35.719841 2682 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254\": not found" containerID="da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254" Jan 13 20:41:35.719952 kubelet[2682]: I0113 20:41:35.719901 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254"} err="failed to get container status \"da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254\": rpc error: code = NotFound desc = an error occurred when try to find container \"da8e56d168cac96fcaf57e4934509bed282799d187aa5e4cc3b2833cc9e20254\": not found" Jan 13 20:41:35.719952 kubelet[2682]: I0113 20:41:35.719928 2682 scope.go:117] "RemoveContainer" containerID="55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880" Jan 13 20:41:35.720129 containerd[1494]: time="2025-01-13T20:41:35.720095078Z" level=error msg="ContainerStatus for \"55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880\": not found" Jan 13 20:41:35.720284 kubelet[2682]: E0113 20:41:35.720236 2682 remote_runtime.go:432] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880\": not found" containerID="55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880" Jan 13 20:41:35.720284 kubelet[2682]: I0113 20:41:35.720271 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880"} err="failed to get container status \"55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880\": rpc error: code = NotFound desc = an error occurred when try to find container \"55a076f9ae7ba84bf2eb9c9947ada5c10459638e15937f8f0782f5ff560c2880\": not found" Jan 13 20:41:35.720340 kubelet[2682]: I0113 20:41:35.720293 2682 scope.go:117] "RemoveContainer" containerID="0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e" Jan 13 20:41:35.720494 containerd[1494]: time="2025-01-13T20:41:35.720460223Z" level=error msg="ContainerStatus for \"0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e\": not found" Jan 13 20:41:35.720675 kubelet[2682]: E0113 20:41:35.720639 2682 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e\": not found" containerID="0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e" Jan 13 20:41:35.720717 kubelet[2682]: I0113 20:41:35.720680 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e"} err="failed to get container status \"0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e\": rpc error: code = NotFound desc = an error occurred when try to find container \"0516f7c72151c65a9acdeb26e58532d9b30dbf20611c988cfa540f8b1186860e\": not found" Jan 13 20:41:35.720751 kubelet[2682]: I0113 20:41:35.720722 2682 scope.go:117] "RemoveContainer" containerID="98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798" Jan 13 20:41:35.721116 containerd[1494]: time="2025-01-13T20:41:35.721061599Z" level=error msg="ContainerStatus for \"98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798\": not found" Jan 13 20:41:35.721348 kubelet[2682]: E0113 20:41:35.721303 2682 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798\": not found" containerID="98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798" Jan 13 20:41:35.721438 kubelet[2682]: I0113 20:41:35.721345 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798"} err="failed to get container status \"98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"98a67153ed5da4a889a82f5e1b1ff1320221897d2a643dde74520a658ae5a798\": not found" Jan 13 20:41:35.721438 kubelet[2682]: I0113 20:41:35.721375 2682 scope.go:117] "RemoveContainer" containerID="643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf" Jan 13 20:41:35.721706 containerd[1494]: time="2025-01-13T20:41:35.721661232Z" level=error msg="ContainerStatus for \"643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf\": not found" Jan 13 20:41:35.721883 kubelet[2682]: E0113 20:41:35.721836 2682 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf\": not found" containerID="643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf" Jan 13 20:41:35.721943 kubelet[2682]: I0113 20:41:35.721886 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf"} err="failed to get container status \"643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"643ba9bd3788bbae1fd29e7332c18454cf29b493c2c07875a4189ceed3f063cf\": not found" Jan 13 20:41:35.834695 kubelet[2682]: E0113 20:41:35.834636 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:41:36.062179 sshd[4361]: Connection closed by 10.0.0.1 port 38464 Jan 13 20:41:36.063927 sshd-session[4359]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:36.072715 systemd[1]: sshd@27-10.0.0.97:22-10.0.0.1:38464.service: Deactivated successfully. Jan 13 20:41:36.075406 systemd[1]: session-28.scope: Deactivated successfully. Jan 13 20:41:36.077192 systemd-logind[1481]: Session 28 logged out. Waiting for processes to exit. Jan 13 20:41:36.086280 systemd[1]: Started sshd@28-10.0.0.97:22-10.0.0.1:33326.service - OpenSSH per-connection server daemon (10.0.0.1:33326). Jan 13 20:41:36.087475 systemd-logind[1481]: Removed session 28. Jan 13 20:41:36.136738 sshd[4523]: Accepted publickey for core from 10.0.0.1 port 33326 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:41:36.139062 sshd-session[4523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:36.145207 systemd-logind[1481]: New session 29 of user core. Jan 13 20:41:36.157334 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 13 20:41:36.836931 kubelet[2682]: I0113 20:41:36.836880 2682 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="353a00b4-f807-4712-80d0-a3f47f2ea44d" path="/var/lib/kubelet/pods/353a00b4-f807-4712-80d0-a3f47f2ea44d/volumes" Jan 13 20:41:36.837618 kubelet[2682]: I0113 20:41:36.837587 2682 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a908f06-106f-4b55-bc43-021bd5c26297" path="/var/lib/kubelet/pods/4a908f06-106f-4b55-bc43-021bd5c26297/volumes" Jan 13 20:41:37.715197 sshd[4526]: Connection closed by 10.0.0.1 port 33326 Jan 13 20:41:37.715711 sshd-session[4523]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:37.728441 systemd[1]: sshd@28-10.0.0.97:22-10.0.0.1:33326.service: Deactivated successfully. Jan 13 20:41:37.731323 systemd[1]: session-29.scope: Deactivated successfully. Jan 13 20:41:37.733552 systemd-logind[1481]: Session 29 logged out. Waiting for processes to exit. Jan 13 20:41:37.749256 systemd[1]: Started sshd@29-10.0.0.97:22-10.0.0.1:33328.service - OpenSSH per-connection server daemon (10.0.0.1:33328). Jan 13 20:41:37.750250 systemd-logind[1481]: Removed session 29. Jan 13 20:41:37.789682 sshd[4538]: Accepted publickey for core from 10.0.0.1 port 33328 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:41:37.791558 sshd-session[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:37.796451 systemd-logind[1481]: New session 30 of user core. Jan 13 20:41:37.806093 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 13 20:41:37.834875 kubelet[2682]: E0113 20:41:37.834807 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:41:37.858610 sshd[4541]: Connection closed by 10.0.0.1 port 33328 Jan 13 20:41:37.858983 sshd-session[4538]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:37.872177 systemd[1]: sshd@29-10.0.0.97:22-10.0.0.1:33328.service: Deactivated successfully. Jan 13 20:41:37.875478 systemd[1]: session-30.scope: Deactivated successfully. Jan 13 20:41:37.878350 systemd-logind[1481]: Session 30 logged out. Waiting for processes to exit. Jan 13 20:41:37.885134 systemd[1]: Started sshd@30-10.0.0.97:22-10.0.0.1:33340.service - OpenSSH per-connection server daemon (10.0.0.1:33340). Jan 13 20:41:37.886180 systemd-logind[1481]: Removed session 30. Jan 13 20:41:37.898720 kubelet[2682]: E0113 20:41:37.898645 2682 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:41:37.931782 sshd[4547]: Accepted publickey for core from 10.0.0.1 port 33340 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:41:37.933862 sshd-session[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:37.940447 systemd-logind[1481]: New session 31 of user core. Jan 13 20:41:37.953140 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 13 20:41:38.094766 kubelet[2682]: I0113 20:41:38.093438 2682 topology_manager.go:215] "Topology Admit Handler" podUID="9948a81c-7499-4816-ad3d-6cf92ba90141" podNamespace="kube-system" podName="cilium-q7hsn" Jan 13 20:41:38.094766 kubelet[2682]: E0113 20:41:38.093526 2682 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4a908f06-106f-4b55-bc43-021bd5c26297" containerName="mount-cgroup" Jan 13 20:41:38.094766 kubelet[2682]: E0113 20:41:38.093540 2682 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4a908f06-106f-4b55-bc43-021bd5c26297" containerName="apply-sysctl-overwrites" Jan 13 20:41:38.094766 kubelet[2682]: E0113 20:41:38.093549 2682 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4a908f06-106f-4b55-bc43-021bd5c26297" containerName="cilium-agent" Jan 13 20:41:38.094766 kubelet[2682]: E0113 20:41:38.093559 2682 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="353a00b4-f807-4712-80d0-a3f47f2ea44d" containerName="cilium-operator" Jan 13 20:41:38.094766 kubelet[2682]: E0113 20:41:38.093567 2682 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4a908f06-106f-4b55-bc43-021bd5c26297" containerName="mount-bpf-fs" Jan 13 20:41:38.094766 kubelet[2682]: E0113 20:41:38.093576 2682 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4a908f06-106f-4b55-bc43-021bd5c26297" containerName="clean-cilium-state" Jan 13 20:41:38.094766 kubelet[2682]: I0113 20:41:38.093610 2682 memory_manager.go:354] "RemoveStaleState removing state" podUID="353a00b4-f807-4712-80d0-a3f47f2ea44d" containerName="cilium-operator" Jan 13 20:41:38.094766 kubelet[2682]: I0113 20:41:38.093620 2682 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a908f06-106f-4b55-bc43-021bd5c26297" containerName="cilium-agent" Jan 13 20:41:38.109054 systemd[1]: Created slice kubepods-burstable-pod9948a81c_7499_4816_ad3d_6cf92ba90141.slice - libcontainer container kubepods-burstable-pod9948a81c_7499_4816_ad3d_6cf92ba90141.slice. 
Jan 13 20:41:38.256097 kubelet[2682]: I0113 20:41:38.256031 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9948a81c-7499-4816-ad3d-6cf92ba90141-etc-cni-netd\") pod \"cilium-q7hsn\" (UID: \"9948a81c-7499-4816-ad3d-6cf92ba90141\") " pod="kube-system/cilium-q7hsn"
Jan 13 20:41:38.256097 kubelet[2682]: I0113 20:41:38.256085 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9948a81c-7499-4816-ad3d-6cf92ba90141-lib-modules\") pod \"cilium-q7hsn\" (UID: \"9948a81c-7499-4816-ad3d-6cf92ba90141\") " pod="kube-system/cilium-q7hsn"
Jan 13 20:41:38.256097 kubelet[2682]: I0113 20:41:38.256108 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9948a81c-7499-4816-ad3d-6cf92ba90141-xtables-lock\") pod \"cilium-q7hsn\" (UID: \"9948a81c-7499-4816-ad3d-6cf92ba90141\") " pod="kube-system/cilium-q7hsn"
Jan 13 20:41:38.256314 kubelet[2682]: I0113 20:41:38.256137 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9948a81c-7499-4816-ad3d-6cf92ba90141-clustermesh-secrets\") pod \"cilium-q7hsn\" (UID: \"9948a81c-7499-4816-ad3d-6cf92ba90141\") " pod="kube-system/cilium-q7hsn"
Jan 13 20:41:38.256314 kubelet[2682]: I0113 20:41:38.256158 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9948a81c-7499-4816-ad3d-6cf92ba90141-bpf-maps\") pod \"cilium-q7hsn\" (UID: \"9948a81c-7499-4816-ad3d-6cf92ba90141\") " pod="kube-system/cilium-q7hsn"
Jan 13 20:41:38.256314 kubelet[2682]: I0113 20:41:38.256181 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9948a81c-7499-4816-ad3d-6cf92ba90141-cilium-config-path\") pod \"cilium-q7hsn\" (UID: \"9948a81c-7499-4816-ad3d-6cf92ba90141\") " pod="kube-system/cilium-q7hsn"
Jan 13 20:41:38.256314 kubelet[2682]: I0113 20:41:38.256200 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9948a81c-7499-4816-ad3d-6cf92ba90141-host-proc-sys-net\") pod \"cilium-q7hsn\" (UID: \"9948a81c-7499-4816-ad3d-6cf92ba90141\") " pod="kube-system/cilium-q7hsn"
Jan 13 20:41:38.256314 kubelet[2682]: I0113 20:41:38.256218 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9948a81c-7499-4816-ad3d-6cf92ba90141-hubble-tls\") pod \"cilium-q7hsn\" (UID: \"9948a81c-7499-4816-ad3d-6cf92ba90141\") " pod="kube-system/cilium-q7hsn"
Jan 13 20:41:38.256314 kubelet[2682]: I0113 20:41:38.256235 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6w2t\" (UniqueName: \"kubernetes.io/projected/9948a81c-7499-4816-ad3d-6cf92ba90141-kube-api-access-g6w2t\") pod \"cilium-q7hsn\" (UID: \"9948a81c-7499-4816-ad3d-6cf92ba90141\") " pod="kube-system/cilium-q7hsn"
Jan 13 20:41:38.256516 kubelet[2682]: I0113 20:41:38.256264 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9948a81c-7499-4816-ad3d-6cf92ba90141-cilium-run\") pod \"cilium-q7hsn\" (UID: \"9948a81c-7499-4816-ad3d-6cf92ba90141\") " pod="kube-system/cilium-q7hsn"
Jan 13 20:41:38.256516 kubelet[2682]: I0113 20:41:38.256291 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9948a81c-7499-4816-ad3d-6cf92ba90141-cilium-ipsec-secrets\") pod \"cilium-q7hsn\" (UID: \"9948a81c-7499-4816-ad3d-6cf92ba90141\") " pod="kube-system/cilium-q7hsn"
Jan 13 20:41:38.256516 kubelet[2682]: I0113 20:41:38.256314 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9948a81c-7499-4816-ad3d-6cf92ba90141-cilium-cgroup\") pod \"cilium-q7hsn\" (UID: \"9948a81c-7499-4816-ad3d-6cf92ba90141\") " pod="kube-system/cilium-q7hsn"
Jan 13 20:41:38.256516 kubelet[2682]: I0113 20:41:38.256332 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9948a81c-7499-4816-ad3d-6cf92ba90141-cni-path\") pod \"cilium-q7hsn\" (UID: \"9948a81c-7499-4816-ad3d-6cf92ba90141\") " pod="kube-system/cilium-q7hsn"
Jan 13 20:41:38.256516 kubelet[2682]: I0113 20:41:38.256350 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9948a81c-7499-4816-ad3d-6cf92ba90141-host-proc-sys-kernel\") pod \"cilium-q7hsn\" (UID: \"9948a81c-7499-4816-ad3d-6cf92ba90141\") " pod="kube-system/cilium-q7hsn"
Jan 13 20:41:38.256516 kubelet[2682]: I0113 20:41:38.256382 2682 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9948a81c-7499-4816-ad3d-6cf92ba90141-hostproc\") pod \"cilium-q7hsn\" (UID: \"9948a81c-7499-4816-ad3d-6cf92ba90141\") " pod="kube-system/cilium-q7hsn"
Jan 13 20:41:38.712798 kubelet[2682]: E0113 20:41:38.712726 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:38.713758 containerd[1494]: time="2025-01-13T20:41:38.713346794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q7hsn,Uid:9948a81c-7499-4816-ad3d-6cf92ba90141,Namespace:kube-system,Attempt:0,}"
Jan 13 20:41:39.250061 containerd[1494]: time="2025-01-13T20:41:39.249958954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:41:39.250061 containerd[1494]: time="2025-01-13T20:41:39.250014611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:41:39.250061 containerd[1494]: time="2025-01-13T20:41:39.250030561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:41:39.250257 containerd[1494]: time="2025-01-13T20:41:39.250119181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:41:39.271033 systemd[1]: Started cri-containerd-9e841270c0d15116e7f4ff336823aed64b8cac8ab0671fd3d8b5a0b9cacdf266.scope - libcontainer container 9e841270c0d15116e7f4ff336823aed64b8cac8ab0671fd3d8b5a0b9cacdf266.
Jan 13 20:41:39.294486 containerd[1494]: time="2025-01-13T20:41:39.294355796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q7hsn,Uid:9948a81c-7499-4816-ad3d-6cf92ba90141,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e841270c0d15116e7f4ff336823aed64b8cac8ab0671fd3d8b5a0b9cacdf266\""
Jan 13 20:41:39.295585 kubelet[2682]: E0113 20:41:39.295533 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:39.298489 containerd[1494]: time="2025-01-13T20:41:39.298437062Z" level=info msg="CreateContainer within sandbox \"9e841270c0d15116e7f4ff336823aed64b8cac8ab0671fd3d8b5a0b9cacdf266\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:41:39.834945 containerd[1494]: time="2025-01-13T20:41:39.834802570Z" level=info msg="CreateContainer within sandbox \"9e841270c0d15116e7f4ff336823aed64b8cac8ab0671fd3d8b5a0b9cacdf266\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ba7cacb5e8bcda143f82c154bc1911de46b89cd2fc62642950f5c10342df32d3\""
Jan 13 20:41:39.835646 containerd[1494]: time="2025-01-13T20:41:39.835614390Z" level=info msg="StartContainer for \"ba7cacb5e8bcda143f82c154bc1911de46b89cd2fc62642950f5c10342df32d3\""
Jan 13 20:41:39.867065 systemd[1]: Started cri-containerd-ba7cacb5e8bcda143f82c154bc1911de46b89cd2fc62642950f5c10342df32d3.scope - libcontainer container ba7cacb5e8bcda143f82c154bc1911de46b89cd2fc62642950f5c10342df32d3.
Jan 13 20:41:39.924675 systemd[1]: cri-containerd-ba7cacb5e8bcda143f82c154bc1911de46b89cd2fc62642950f5c10342df32d3.scope: Deactivated successfully.
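The reconciler entries above list every volume the kubelet attaches for cilium-q7hsn: host paths (etc-cni-netd, lib-modules, xtables-lock, bpf-maps, cilium-run, cilium-cgroup, cni-path, host-proc-sys-net, host-proc-sys-kernel, hostproc), secrets (clustermesh-secrets, cilium-ipsec-secrets), a configmap (cilium-config-path) and projected volumes (hubble-tls, kube-api-access-g6w2t). The same information can be read back from the API server; a minimal sketch, assuming the official "kubernetes" Python client and a kubeconfig that can reach this cluster (nothing below is taken from the log itself):

from kubernetes import client, config

# Sketch only: list the volumes of the cilium-q7hsn pod whose attachment is logged
# above. Each volume's source key (host_path, secret, config_map, projected)
# mirrors the UniqueName prefixes in the reconciler_common.go entries.
config.load_kube_config()  # assumes a local kubeconfig; use load_incluster_config() in-cluster
pod = client.CoreV1Api().read_namespaced_pod("cilium-q7hsn", "kube-system")
for vol in pod.spec.volumes:
    source = next(k for k, v in vol.to_dict().items() if v is not None and k != "name")
    print(f"{vol.name}: {source}")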
Jan 13 20:41:39.972093 containerd[1494]: time="2025-01-13T20:41:39.972035877Z" level=info msg="StartContainer for \"ba7cacb5e8bcda143f82c154bc1911de46b89cd2fc62642950f5c10342df32d3\" returns successfully"
Jan 13 20:41:40.132577 kubelet[2682]: E0113 20:41:40.132540 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:40.183518 containerd[1494]: time="2025-01-13T20:41:40.183430375Z" level=info msg="shim disconnected" id=ba7cacb5e8bcda143f82c154bc1911de46b89cd2fc62642950f5c10342df32d3 namespace=k8s.io
Jan 13 20:41:40.183518 containerd[1494]: time="2025-01-13T20:41:40.183504547Z" level=warning msg="cleaning up after shim disconnected" id=ba7cacb5e8bcda143f82c154bc1911de46b89cd2fc62642950f5c10342df32d3 namespace=k8s.io
Jan 13 20:41:40.183518 containerd[1494]: time="2025-01-13T20:41:40.183515608Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:41:41.136981 kubelet[2682]: E0113 20:41:41.136937 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:41.139365 containerd[1494]: time="2025-01-13T20:41:41.139318634Z" level=info msg="CreateContainer within sandbox \"9e841270c0d15116e7f4ff336823aed64b8cac8ab0671fd3d8b5a0b9cacdf266\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:41:41.155467 containerd[1494]: time="2025-01-13T20:41:41.155417769Z" level=info msg="CreateContainer within sandbox \"9e841270c0d15116e7f4ff336823aed64b8cac8ab0671fd3d8b5a0b9cacdf266\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3dcc7862af97c7e291a7b49e3a5d6a0fb3201b4886ebade60f4a43ea99a18ab6\""
Jan 13 20:41:41.156060 containerd[1494]: time="2025-01-13T20:41:41.156016003Z" level=info msg="StartContainer for \"3dcc7862af97c7e291a7b49e3a5d6a0fb3201b4886ebade60f4a43ea99a18ab6\""
Jan 13 20:41:41.190016 systemd[1]: Started cri-containerd-3dcc7862af97c7e291a7b49e3a5d6a0fb3201b4886ebade60f4a43ea99a18ab6.scope - libcontainer container 3dcc7862af97c7e291a7b49e3a5d6a0fb3201b4886ebade60f4a43ea99a18ab6.
Jan 13 20:41:41.220756 containerd[1494]: time="2025-01-13T20:41:41.220701493Z" level=info msg="StartContainer for \"3dcc7862af97c7e291a7b49e3a5d6a0fb3201b4886ebade60f4a43ea99a18ab6\" returns successfully"
Jan 13 20:41:41.229146 systemd[1]: cri-containerd-3dcc7862af97c7e291a7b49e3a5d6a0fb3201b4886ebade60f4a43ea99a18ab6.scope: Deactivated successfully.
Jan 13 20:41:41.268611 containerd[1494]: time="2025-01-13T20:41:41.268530963Z" level=info msg="shim disconnected" id=3dcc7862af97c7e291a7b49e3a5d6a0fb3201b4886ebade60f4a43ea99a18ab6 namespace=k8s.io
Jan 13 20:41:41.268611 containerd[1494]: time="2025-01-13T20:41:41.268598773Z" level=warning msg="cleaning up after shim disconnected" id=3dcc7862af97c7e291a7b49e3a5d6a0fb3201b4886ebade60f4a43ea99a18ab6 namespace=k8s.io
Jan 13 20:41:41.268611 containerd[1494]: time="2025-01-13T20:41:41.268609753Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:41:41.362871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dcc7862af97c7e291a7b49e3a5d6a0fb3201b4886ebade60f4a43ea99a18ab6-rootfs.mount: Deactivated successfully.
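The dns.go:153 "Nameserver limits exceeded" warnings that recur throughout this section come from the three-nameserver limit of resolv.conf: the kubelet applies the first three entries (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and reports the remainder as omitted. A toy model of that trimming, for illustration only; the real logic lives in the kubelet's DNS configurer, and the extra 9.9.9.9 below is a stand-in, since the log does not say which server was dropped:

MAX_NAMESERVERS = 3  # resolv.conf honours at most three nameservers

def split_nameservers(resolv_conf: str) -> tuple[list[str], list[str]]:
    """Return (applied, omitted) nameservers parsed from resolv.conf text."""
    servers = []
    for line in resolv_conf.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

applied, omitted = split_nameservers(
    "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
)
print("applied:", " ".join(applied), "| omitted:", " ".join(omitted))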
Jan 13 20:41:42.140152 kubelet[2682]: E0113 20:41:42.140111 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:42.142361 containerd[1494]: time="2025-01-13T20:41:42.142306525Z" level=info msg="CreateContainer within sandbox \"9e841270c0d15116e7f4ff336823aed64b8cac8ab0671fd3d8b5a0b9cacdf266\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:41:42.899785 kubelet[2682]: E0113 20:41:42.899712 2682 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:41:43.265358 containerd[1494]: time="2025-01-13T20:41:43.265152878Z" level=info msg="CreateContainer within sandbox \"9e841270c0d15116e7f4ff336823aed64b8cac8ab0671fd3d8b5a0b9cacdf266\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"30687e8f21489a84f34dc105d0fef0bcdc5e20699d736d5e30b293a584fe2ef9\""
Jan 13 20:41:43.266029 containerd[1494]: time="2025-01-13T20:41:43.265990542Z" level=info msg="StartContainer for \"30687e8f21489a84f34dc105d0fef0bcdc5e20699d736d5e30b293a584fe2ef9\""
Jan 13 20:41:43.304042 systemd[1]: Started cri-containerd-30687e8f21489a84f34dc105d0fef0bcdc5e20699d736d5e30b293a584fe2ef9.scope - libcontainer container 30687e8f21489a84f34dc105d0fef0bcdc5e20699d736d5e30b293a584fe2ef9.
Jan 13 20:41:43.354897 systemd[1]: cri-containerd-30687e8f21489a84f34dc105d0fef0bcdc5e20699d736d5e30b293a584fe2ef9.scope: Deactivated successfully.
Jan 13 20:41:43.434751 containerd[1494]: time="2025-01-13T20:41:43.434678850Z" level=info msg="StartContainer for \"30687e8f21489a84f34dc105d0fef0bcdc5e20699d736d5e30b293a584fe2ef9\" returns successfully"
Jan 13 20:41:43.456195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30687e8f21489a84f34dc105d0fef0bcdc5e20699d736d5e30b293a584fe2ef9-rootfs.mount: Deactivated successfully.
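The kubelet.go:2900 entry above explains why the node still reports "Container runtime network not ready": the CNI plugin is not initialized until the cilium-agent container created in the following entries is running. A few entries later (setters.go:580) the same reason is written into the node's Ready condition. A sketch for reading that condition back through the API, again assuming the "kubernetes" Python client and a reachable kubeconfig; the node name "localhost" is taken from the log:

from kubernetes import client, config

# Sketch only: read the Ready condition that the kubelet publishes for this node.
config.load_kube_config()
node = client.CoreV1Api().read_node("localhost")
ready = next(c for c in node.status.conditions if c.type == "Ready")
print(ready.status, ready.reason, ready.message)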
Jan 13 20:41:43.532719 containerd[1494]: time="2025-01-13T20:41:43.532550072Z" level=info msg="shim disconnected" id=30687e8f21489a84f34dc105d0fef0bcdc5e20699d736d5e30b293a584fe2ef9 namespace=k8s.io
Jan 13 20:41:43.532719 containerd[1494]: time="2025-01-13T20:41:43.532605878Z" level=warning msg="cleaning up after shim disconnected" id=30687e8f21489a84f34dc105d0fef0bcdc5e20699d736d5e30b293a584fe2ef9 namespace=k8s.io
Jan 13 20:41:43.532719 containerd[1494]: time="2025-01-13T20:41:43.532614235Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:41:44.145214 kubelet[2682]: E0113 20:41:44.145166 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:44.147450 containerd[1494]: time="2025-01-13T20:41:44.147400217Z" level=info msg="CreateContainer within sandbox \"9e841270c0d15116e7f4ff336823aed64b8cac8ab0671fd3d8b5a0b9cacdf266\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:41:44.162887 containerd[1494]: time="2025-01-13T20:41:44.162806141Z" level=info msg="CreateContainer within sandbox \"9e841270c0d15116e7f4ff336823aed64b8cac8ab0671fd3d8b5a0b9cacdf266\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b84db3c7f6053609508e9b16e5ef227f6793e8f6498750de4e66ca2eb31ff3f9\""
Jan 13 20:41:44.165155 containerd[1494]: time="2025-01-13T20:41:44.163674374Z" level=info msg="StartContainer for \"b84db3c7f6053609508e9b16e5ef227f6793e8f6498750de4e66ca2eb31ff3f9\""
Jan 13 20:41:44.199091 systemd[1]: Started cri-containerd-b84db3c7f6053609508e9b16e5ef227f6793e8f6498750de4e66ca2eb31ff3f9.scope - libcontainer container b84db3c7f6053609508e9b16e5ef227f6793e8f6498750de4e66ca2eb31ff3f9.
Jan 13 20:41:44.225419 systemd[1]: cri-containerd-b84db3c7f6053609508e9b16e5ef227f6793e8f6498750de4e66ca2eb31ff3f9.scope: Deactivated successfully.
Jan 13 20:41:44.227540 containerd[1494]: time="2025-01-13T20:41:44.227480549Z" level=info msg="StartContainer for \"b84db3c7f6053609508e9b16e5ef227f6793e8f6498750de4e66ca2eb31ff3f9\" returns successfully"
Jan 13 20:41:44.248062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b84db3c7f6053609508e9b16e5ef227f6793e8f6498750de4e66ca2eb31ff3f9-rootfs.mount: Deactivated successfully.
Jan 13 20:41:44.252411 containerd[1494]: time="2025-01-13T20:41:44.252350208Z" level=info msg="shim disconnected" id=b84db3c7f6053609508e9b16e5ef227f6793e8f6498750de4e66ca2eb31ff3f9 namespace=k8s.io
Jan 13 20:41:44.252513 containerd[1494]: time="2025-01-13T20:41:44.252409311Z" level=warning msg="cleaning up after shim disconnected" id=b84db3c7f6053609508e9b16e5ef227f6793e8f6498750de4e66ca2eb31ff3f9 namespace=k8s.io
Jan 13 20:41:44.252513 containerd[1494]: time="2025-01-13T20:41:44.252420964Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:41:45.150609 kubelet[2682]: E0113 20:41:45.150561 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:45.154138 containerd[1494]: time="2025-01-13T20:41:45.154077686Z" level=info msg="CreateContainer within sandbox \"9e841270c0d15116e7f4ff336823aed64b8cac8ab0671fd3d8b5a0b9cacdf266\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:41:45.221222 kubelet[2682]: I0113 20:41:45.220830 2682 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:41:45Z","lastTransitionTime":"2025-01-13T20:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:41:45.320173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974651729.mount: Deactivated successfully.
Jan 13 20:41:45.442812 containerd[1494]: time="2025-01-13T20:41:45.442587819Z" level=info msg="CreateContainer within sandbox \"9e841270c0d15116e7f4ff336823aed64b8cac8ab0671fd3d8b5a0b9cacdf266\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"97f3282562f382eba3647c7e3e3fa3c935a45ecd90f59a31453db53db44211b5\""
Jan 13 20:41:45.443438 containerd[1494]: time="2025-01-13T20:41:45.443391429Z" level=info msg="StartContainer for \"97f3282562f382eba3647c7e3e3fa3c935a45ecd90f59a31453db53db44211b5\""
Jan 13 20:41:45.477008 systemd[1]: Started cri-containerd-97f3282562f382eba3647c7e3e3fa3c935a45ecd90f59a31453db53db44211b5.scope - libcontainer container 97f3282562f382eba3647c7e3e3fa3c935a45ecd90f59a31453db53db44211b5.
Jan 13 20:41:45.623511 containerd[1494]: time="2025-01-13T20:41:45.623446408Z" level=info msg="StartContainer for \"97f3282562f382eba3647c7e3e3fa3c935a45ecd90f59a31453db53db44211b5\" returns successfully"
Jan 13 20:41:45.989897 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 13 20:41:46.177363 kubelet[2682]: E0113 20:41:46.176428 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:47.178105 kubelet[2682]: E0113 20:41:47.178060 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:49.410315 systemd-networkd[1414]: lxc_health: Link UP
Jan 13 20:41:49.421434 systemd-networkd[1414]: lxc_health: Gained carrier
Jan 13 20:41:50.715171 kubelet[2682]: E0113 20:41:50.715073 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:50.731102 kubelet[2682]: I0113 20:41:50.730000 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q7hsn" podStartSLOduration=13.729976952 podStartE2EDuration="13.729976952s" podCreationTimestamp="2025-01-13 20:41:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:41:46.19373753 +0000 UTC m=+103.442776090" watchObservedRunningTime="2025-01-13 20:41:50.729976952 +0000 UTC m=+107.979015502"
Jan 13 20:41:51.120055 systemd-networkd[1414]: lxc_health: Gained IPv6LL
Jan 13 20:41:51.186606 kubelet[2682]: E0113 20:41:51.186562 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:52.188944 kubelet[2682]: E0113 20:41:52.188897 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:53.868856 systemd[1]: run-containerd-runc-k8s.io-97f3282562f382eba3647c7e3e3fa3c935a45ecd90f59a31453db53db44211b5-runc.sUAt3I.mount: Deactivated successfully.
Jan 13 20:41:56.027997 kubelet[2682]: E0113 20:41:56.027916 2682 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45626->127.0.0.1:46387: write tcp 127.0.0.1:45626->127.0.0.1:46387: write: broken pipe
Jan 13 20:41:56.031894 sshd[4549]: Connection closed by 10.0.0.1 port 33340
Jan 13 20:41:56.032262 sshd-session[4547]: pam_unix(sshd:session): session closed for user core
Jan 13 20:41:56.036056 systemd[1]: sshd@30-10.0.0.97:22-10.0.0.1:33340.service: Deactivated successfully.
Jan 13 20:41:56.038307 systemd[1]: session-31.scope: Deactivated successfully.
Jan 13 20:41:56.039029 systemd-logind[1481]: Session 31 logged out. Waiting for processes to exit.
Jan 13 20:41:56.039923 systemd-logind[1481]: Removed session 31.
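For the pod_startup_latency_tracker entry above, the reported podStartSLOduration=13.729976952 matches the gap between podCreationTimestamp (2025-01-13 20:41:37) and watchObservedRunningTime (2025-01-13 20:41:50.729976952), with no image-pull interval recorded (both pulling timestamps are zero). A quick check of that arithmetic, with the timestamps copied from the log entry:

from datetime import datetime, timezone

created = datetime(2025, 1, 13, 20, 41, 37, 0, tzinfo=timezone.utc)
watch_observed_running = datetime(2025, 1, 13, 20, 41, 50, 729977, tzinfo=timezone.utc)
# ~13.729977 s, matching the logged podStartSLOduration of 13.729976952
print((watch_observed_running - created).total_seconds())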