Feb 13 15:28:11.923228 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:00:20 -00 2025
Feb 13 15:28:11.923251 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65
Feb 13 15:28:11.923262 kernel: BIOS-provided physical RAM map:
Feb 13 15:28:11.923269 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 15:28:11.923275 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 15:28:11.923282 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 15:28:11.923289 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Feb 13 15:28:11.923296 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Feb 13 15:28:11.923302 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 15:28:11.923311 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 15:28:11.923318 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:28:11.923324 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 15:28:11.923333 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 15:28:11.923340 kernel: NX (Execute Disable) protection: active
Feb 13 15:28:11.923348 kernel: APIC: Static calls initialized
Feb 13 15:28:11.923358 kernel: SMBIOS 2.8 present.
Feb 13 15:28:11.923365 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Feb 13 15:28:11.923372 kernel: Hypervisor detected: KVM Feb 13 15:28:11.923379 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 15:28:11.923386 kernel: kvm-clock: using sched offset of 3709692054 cycles Feb 13 15:28:11.923394 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 15:28:11.923401 kernel: tsc: Detected 2794.750 MHz processor Feb 13 15:28:11.923409 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 15:28:11.923417 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 15:28:11.923424 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Feb 13 15:28:11.923434 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Feb 13 15:28:11.923441 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 15:28:11.923449 kernel: Using GB pages for direct mapping Feb 13 15:28:11.923456 kernel: ACPI: Early table checksum verification disabled Feb 13 15:28:11.923463 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Feb 13 15:28:11.923470 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:28:11.923478 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:28:11.923485 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:28:11.923492 kernel: ACPI: FACS 0x000000009CFE0000 000040 Feb 13 15:28:11.923501 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:28:11.923509 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:28:11.923516 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:28:11.923523 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:28:11.923530 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Feb 13 15:28:11.923538 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Feb 13 15:28:11.923549 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Feb 13 15:28:11.923559 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Feb 13 15:28:11.923569 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Feb 13 15:28:11.923579 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Feb 13 15:28:11.923604 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Feb 13 15:28:11.923648 kernel: No NUMA configuration found Feb 13 15:28:11.923680 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Feb 13 15:28:11.923704 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Feb 13 15:28:11.923738 kernel: Zone ranges: Feb 13 15:28:11.923763 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 15:28:11.923788 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Feb 13 15:28:11.923811 kernel: Normal empty Feb 13 15:28:11.923867 kernel: Movable zone start for each node Feb 13 15:28:11.923878 kernel: Early memory node ranges Feb 13 15:28:11.923886 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 13 15:28:11.923894 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Feb 13 15:28:11.923901 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Feb 13 15:28:11.923912 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 15:28:11.923922 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 13 15:28:11.923930 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Feb 13 15:28:11.923937 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 15:28:11.923945 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 15:28:11.923952 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 15:28:11.923959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 15:28:11.923967 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 15:28:11.923974 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 15:28:11.923984 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 15:28:11.923991 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 15:28:11.923999 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 15:28:11.924006 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 15:28:11.924014 kernel: TSC deadline timer available Feb 13 15:28:11.924021 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 13 15:28:11.924028 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 15:28:11.924036 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 13 15:28:11.924043 kernel: kvm-guest: setup PV sched yield Feb 13 15:28:11.924050 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Feb 13 15:28:11.924060 kernel: Booting paravirtualized kernel on KVM Feb 13 15:28:11.924068 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 15:28:11.924076 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Feb 13 15:28:11.924083 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Feb 13 15:28:11.924091 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Feb 13 15:28:11.924098 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 13 15:28:11.924106 kernel: kvm-guest: PV spinlocks enabled Feb 13 15:28:11.924113 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 15:28:11.924122 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65 Feb 13 15:28:11.924133 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:28:11.924140 kernel: random: crng init done Feb 13 15:28:11.924148 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:28:11.924155 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 15:28:11.924162 kernel: Fallback order for Node 0: 0 Feb 13 15:28:11.924170 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Feb 13 15:28:11.924177 kernel: Policy zone: DMA32 Feb 13 15:28:11.924185 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:28:11.924195 kernel: Memory: 2432540K/2571752K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 138952K reserved, 0K cma-reserved) Feb 13 15:28:11.924203 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 15:28:11.924210 kernel: ftrace: allocating 37893 entries in 149 pages Feb 13 15:28:11.924218 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 15:28:11.924225 kernel: Dynamic Preempt: voluntary Feb 13 15:28:11.924232 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:28:11.924243 kernel: rcu: RCU event tracing is enabled. Feb 13 15:28:11.924251 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 15:28:11.924259 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:28:11.924269 kernel: Rude variant of Tasks RCU enabled. Feb 13 15:28:11.924277 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:28:11.924284 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 15:28:11.924295 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 15:28:11.924305 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 13 15:28:11.924315 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:28:11.924322 kernel: Console: colour VGA+ 80x25 Feb 13 15:28:11.924329 kernel: printk: console [ttyS0] enabled Feb 13 15:28:11.924337 kernel: ACPI: Core revision 20230628 Feb 13 15:28:11.924348 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 13 15:28:11.924355 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 15:28:11.924362 kernel: x2apic enabled Feb 13 15:28:11.924370 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 15:28:11.924377 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Feb 13 15:28:11.924385 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Feb 13 15:28:11.924393 kernel: kvm-guest: setup PV IPIs Feb 13 15:28:11.924410 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 15:28:11.924418 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 13 15:28:11.924426 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Feb 13 15:28:11.924433 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 13 15:28:11.924441 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 13 15:28:11.924451 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 13 15:28:11.924459 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 15:28:11.924467 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 15:28:11.924475 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 15:28:11.924482 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 15:28:11.924493 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 13 15:28:11.924500 kernel: RETBleed: Mitigation: untrained return thunk Feb 13 15:28:11.924508 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 15:28:11.924516 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 15:28:11.924524 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Feb 13 15:28:11.924532 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Feb 13 15:28:11.924540 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Feb 13 15:28:11.924548 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 15:28:11.924558 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 15:28:11.924566 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 15:28:11.924573 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 15:28:11.924581 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Feb 13 15:28:11.924589 kernel: Freeing SMP alternatives memory: 32K Feb 13 15:28:11.924596 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:28:11.924604 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:28:11.924612 kernel: landlock: Up and running. Feb 13 15:28:11.924619 kernel: SELinux: Initializing. Feb 13 15:28:11.924630 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:28:11.924638 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:28:11.924655 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 13 15:28:11.924663 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:28:11.924671 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:28:11.924679 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:28:11.924686 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 13 15:28:11.924696 kernel: ... version: 0 Feb 13 15:28:11.924704 kernel: ... bit width: 48 Feb 13 15:28:11.924714 kernel: ... generic registers: 6 Feb 13 15:28:11.924722 kernel: ... value mask: 0000ffffffffffff Feb 13 15:28:11.924730 kernel: ... max period: 00007fffffffffff Feb 13 15:28:11.924737 kernel: ... fixed-purpose events: 0 Feb 13 15:28:11.924745 kernel: ... 
event mask: 000000000000003f Feb 13 15:28:11.924753 kernel: signal: max sigframe size: 1776 Feb 13 15:28:11.924760 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:28:11.924768 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:28:11.924776 kernel: smp: Bringing up secondary CPUs ... Feb 13 15:28:11.924786 kernel: smpboot: x86: Booting SMP configuration: Feb 13 15:28:11.924794 kernel: .... node #0, CPUs: #1 #2 #3 Feb 13 15:28:11.924801 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 15:28:11.924809 kernel: smpboot: Max logical packages: 1 Feb 13 15:28:11.924816 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Feb 13 15:28:11.924824 kernel: devtmpfs: initialized Feb 13 15:28:11.924849 kernel: x86/mm: Memory block size: 128MB Feb 13 15:28:11.924857 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:28:11.924865 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 15:28:11.924876 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:28:11.924883 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:28:11.924891 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:28:11.924899 kernel: audit: type=2000 audit(1739460491.376:1): state=initialized audit_enabled=0 res=1 Feb 13 15:28:11.924906 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:28:11.924914 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 15:28:11.924922 kernel: cpuidle: using governor menu Feb 13 15:28:11.924929 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:28:11.924937 kernel: dca service started, version 1.12.1 Feb 13 15:28:11.924948 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Feb 13 15:28:11.924956 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Feb 13 15:28:11.924963 kernel: PCI: Using configuration type 1 for base access Feb 13 15:28:11.924971 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 15:28:11.924979 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:28:11.924987 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:28:11.924995 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:28:11.925003 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:28:11.925010 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:28:11.925020 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:28:11.925028 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:28:11.925036 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:28:11.925044 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:28:11.925051 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 15:28:11.925059 kernel: ACPI: Interpreter enabled Feb 13 15:28:11.925066 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 15:28:11.925074 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 15:28:11.925082 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 15:28:11.925092 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 15:28:11.925100 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Feb 13 15:28:11.925107 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 15:28:11.925334 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:28:11.925479 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Feb 13 15:28:11.925613 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Feb 13 15:28:11.925623 kernel: PCI host bridge to bus 0000:00 Feb 13 15:28:11.925782 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 15:28:11.925986 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 15:28:11.926108 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 15:28:11.926227 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Feb 13 15:28:11.926345 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 13 15:28:11.926463 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Feb 13 15:28:11.926581 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 15:28:11.926756 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Feb 13 15:28:11.926924 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Feb 13 15:28:11.927059 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Feb 13 15:28:11.927191 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Feb 13 15:28:11.927320 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Feb 13 15:28:11.927451 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 15:28:11.927603 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 15:28:11.927788 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Feb 13 15:28:11.927960 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Feb 13 15:28:11.928095 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Feb 13 15:28:11.928251 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Feb 13 15:28:11.928388 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Feb 13 15:28:11.928519 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Feb 13 
15:28:11.928667 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Feb 13 15:28:11.928818 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 15:28:11.928973 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Feb 13 15:28:11.929108 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Feb 13 15:28:11.929239 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Feb 13 15:28:11.929370 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Feb 13 15:28:11.929521 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Feb 13 15:28:11.929670 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Feb 13 15:28:11.929859 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Feb 13 15:28:11.930029 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Feb 13 15:28:11.930176 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Feb 13 15:28:11.930337 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Feb 13 15:28:11.930471 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Feb 13 15:28:11.930482 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 15:28:11.930496 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 15:28:11.930504 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 15:28:11.930512 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 15:28:11.930520 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Feb 13 15:28:11.930527 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Feb 13 15:28:11.930535 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Feb 13 15:28:11.930543 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Feb 13 15:28:11.930550 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Feb 13 15:28:11.930558 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Feb 13 15:28:11.930569 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Feb 13 15:28:11.930576 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Feb 13 15:28:11.930584 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Feb 13 15:28:11.930592 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Feb 13 15:28:11.930599 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Feb 13 15:28:11.930607 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Feb 13 15:28:11.930615 kernel: iommu: Default domain type: Translated Feb 13 15:28:11.930623 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 15:28:11.930630 kernel: PCI: Using ACPI for IRQ routing Feb 13 15:28:11.930650 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 15:28:11.930660 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 13 15:28:11.930677 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Feb 13 15:28:11.930863 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Feb 13 15:28:11.930999 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Feb 13 15:28:11.931141 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 15:28:11.931153 kernel: vgaarb: loaded Feb 13 15:28:11.931161 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 13 15:28:11.931174 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 13 15:28:11.931182 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 15:28:11.931189 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 
15:28:11.931197 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:28:11.931205 kernel: pnp: PnP ACPI init Feb 13 15:28:11.931376 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Feb 13 15:28:11.931388 kernel: pnp: PnP ACPI: found 6 devices Feb 13 15:28:11.931396 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 15:28:11.931408 kernel: NET: Registered PF_INET protocol family Feb 13 15:28:11.931416 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:28:11.931424 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 15:28:11.931431 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:28:11.931439 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 15:28:11.931447 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 15:28:11.931455 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 15:28:11.931463 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:28:11.931471 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:28:11.931481 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:28:11.931489 kernel: NET: Registered PF_XDP protocol family Feb 13 15:28:11.931612 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 15:28:11.931743 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 15:28:11.931978 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 15:28:11.932222 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Feb 13 15:28:11.932372 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Feb 13 15:28:11.932519 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Feb 13 15:28:11.932540 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:28:11.932548 kernel: Initialise system trusted keyrings Feb 13 15:28:11.932557 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 15:28:11.932568 kernel: Key type asymmetric registered Feb 13 15:28:11.932577 kernel: Asymmetric key parser 'x509' registered Feb 13 15:28:11.932587 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 15:28:11.932597 kernel: io scheduler mq-deadline registered Feb 13 15:28:11.932606 kernel: io scheduler kyber registered Feb 13 15:28:11.932616 kernel: io scheduler bfq registered Feb 13 15:28:11.932629 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 15:28:11.932640 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Feb 13 15:28:11.932660 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 15:28:11.932670 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 15:28:11.932680 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:28:11.932691 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 15:28:11.932701 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 15:28:11.932710 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 15:28:11.932717 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 15:28:11.932884 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 13 15:28:11.932902 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 15:28:11.933040 kernel: 
rtc_cmos 00:04: registered as rtc0 Feb 13 15:28:11.933191 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T15:28:11 UTC (1739460491) Feb 13 15:28:11.933404 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Feb 13 15:28:11.933415 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Feb 13 15:28:11.933423 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:28:11.933431 kernel: Segment Routing with IPv6 Feb 13 15:28:11.933444 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:28:11.933452 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:28:11.933459 kernel: Key type dns_resolver registered Feb 13 15:28:11.933467 kernel: IPI shorthand broadcast: enabled Feb 13 15:28:11.933475 kernel: sched_clock: Marking stable (725003146, 113350993)->(899384235, -61030096) Feb 13 15:28:11.933483 kernel: registered taskstats version 1 Feb 13 15:28:11.933490 kernel: Loading compiled-in X.509 certificates Feb 13 15:28:11.933498 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: a260c8876205efb4ca2ab3eb040cd310ec7afd21' Feb 13 15:28:11.933506 kernel: Key type .fscrypt registered Feb 13 15:28:11.933516 kernel: Key type fscrypt-provisioning registered Feb 13 15:28:11.933524 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 15:28:11.933532 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:28:11.933540 kernel: ima: No architecture policies found Feb 13 15:28:11.933547 kernel: clk: Disabling unused clocks Feb 13 15:28:11.933555 kernel: Freeing unused kernel image (initmem) memory: 43476K Feb 13 15:28:11.933563 kernel: Write protecting the kernel read-only data: 38912k Feb 13 15:28:11.933571 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K Feb 13 15:28:11.933579 kernel: Run /init as init process Feb 13 15:28:11.933589 kernel: with arguments: Feb 13 15:28:11.933597 kernel: /init Feb 13 15:28:11.933604 kernel: with environment: Feb 13 15:28:11.933612 kernel: HOME=/ Feb 13 15:28:11.933619 kernel: TERM=linux Feb 13 15:28:11.933627 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:28:11.933636 systemd[1]: Successfully made /usr/ read-only. Feb 13 15:28:11.933657 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:28:11.933679 systemd[1]: Detected virtualization kvm. Feb 13 15:28:11.933688 systemd[1]: Detected architecture x86-64. Feb 13 15:28:11.933705 systemd[1]: Running in initrd. Feb 13 15:28:11.933714 systemd[1]: No hostname configured, using default hostname. Feb 13 15:28:11.933723 systemd[1]: Hostname set to . Feb 13 15:28:11.933731 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:28:11.933739 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:28:11.933748 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:28:11.933760 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:28:11.933782 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:28:11.933793 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Feb 13 15:28:11.933802 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:28:11.933812 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:28:11.933828 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:28:11.933855 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:28:11.933867 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:28:11.933878 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:28:11.933894 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:28:11.933909 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:28:11.933918 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:28:11.933927 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:28:11.933940 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:28:11.933949 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:28:11.933958 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:28:11.933966 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Feb 13 15:28:11.933975 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:28:11.933983 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:28:11.933992 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:28:11.934000 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:28:11.934009 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:28:11.934020 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:28:11.934029 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:28:11.934037 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:28:11.934046 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:28:11.934054 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:28:11.934063 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:28:11.934071 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:28:11.934080 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:28:11.934091 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:28:11.934100 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:28:11.934146 systemd-journald[192]: Collecting audit messages is disabled. Feb 13 15:28:11.934167 systemd-journald[192]: Journal started Feb 13 15:28:11.934194 systemd-journald[192]: Runtime Journal (/run/log/journal/88df34d37ea94d66baaaf64fe5a3e711) is 6M, max 48.4M, 42.3M free. Feb 13 15:28:11.933735 systemd-modules-load[195]: Inserted module 'overlay' Feb 13 15:28:11.958594 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:28:11.957550 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:28:11.968111 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Feb 13 15:28:11.968128 kernel: Bridge firewalling registered Feb 13 15:28:11.968078 systemd-modules-load[195]: Inserted module 'br_netfilter' Feb 13 15:28:11.969984 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:28:11.972327 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:28:11.973918 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:28:11.976681 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:28:11.982070 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:28:11.983506 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:28:11.985614 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:28:11.990882 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:28:11.992401 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:28:11.998904 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:28:12.008273 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:28:12.008810 dracut-cmdline[224]: dracut-dracut-053 Feb 13 15:28:12.011762 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65 Feb 13 15:28:12.022041 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:28:12.059089 systemd-resolved[246]: Positive Trust Anchors: Feb 13 15:28:12.059107 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:28:12.059138 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:28:12.061935 systemd-resolved[246]: Defaulting to hostname 'linux'. Feb 13 15:28:12.063236 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:28:12.069399 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:28:12.118870 kernel: SCSI subsystem initialized Feb 13 15:28:12.128860 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:28:12.139857 kernel: iscsi: registered transport (tcp) Feb 13 15:28:12.162866 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:28:12.162896 kernel: QLogic iSCSI HBA Driver Feb 13 15:28:12.221208 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Feb 13 15:28:12.230016 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:28:12.257281 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:28:12.257312 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:28:12.258337 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:28:12.301877 kernel: raid6: avx2x4 gen() 29947 MB/s Feb 13 15:28:12.318858 kernel: raid6: avx2x2 gen() 23463 MB/s Feb 13 15:28:12.336194 kernel: raid6: avx2x1 gen() 18112 MB/s Feb 13 15:28:12.336235 kernel: raid6: using algorithm avx2x4 gen() 29947 MB/s Feb 13 15:28:12.354013 kernel: raid6: .... xor() 6185 MB/s, rmw enabled Feb 13 15:28:12.354047 kernel: raid6: using avx2x2 recovery algorithm Feb 13 15:28:12.375874 kernel: xor: automatically using best checksumming function avx Feb 13 15:28:12.541882 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:28:12.556511 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:28:12.569085 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:28:12.586691 systemd-udevd[413]: Using default interface naming scheme 'v255'. Feb 13 15:28:12.593800 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:28:12.603189 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:28:12.618514 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Feb 13 15:28:12.661156 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:28:12.674220 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:28:12.748790 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:28:12.758100 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:28:12.776051 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:28:12.780586 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:28:12.783763 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:28:12.786497 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:28:12.794939 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 15:28:12.805437 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 15:28:12.805638 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:28:12.805654 kernel: GPT:9289727 != 19775487 Feb 13 15:28:12.805678 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:28:12.805691 kernel: GPT:9289727 != 19775487 Feb 13 15:28:12.805703 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:28:12.805716 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:28:12.801081 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:28:12.818128 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:28:12.822943 kernel: libata version 3.00 loaded. 
Feb 13 15:28:12.832856 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 15:28:12.891422 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 15:28:12.891444 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 15:28:12.891459 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 15:28:12.891646 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 15:28:12.891855 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 15:28:12.891869 kernel: AES CTR mode by8 optimization enabled Feb 13 15:28:12.891879 kernel: scsi host0: ahci Feb 13 15:28:12.892050 kernel: scsi host1: ahci Feb 13 15:28:12.892206 kernel: scsi host2: ahci Feb 13 15:28:12.892358 kernel: scsi host3: ahci Feb 13 15:28:12.892556 kernel: scsi host4: ahci Feb 13 15:28:12.892779 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (464) Feb 13 15:28:12.892795 kernel: BTRFS: device fsid 506754f7-5ef1-4c63-ad2a-b7b855a48f85 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (470) Feb 13 15:28:12.892807 kernel: scsi host5: ahci Feb 13 15:28:12.893026 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Feb 13 15:28:12.893043 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Feb 13 15:28:12.893057 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Feb 13 15:28:12.893069 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Feb 13 15:28:12.893079 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Feb 13 15:28:12.893094 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Feb 13 15:28:12.838160 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:28:12.838284 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:28:12.839899 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:28:12.841040 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:28:12.841264 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:28:12.843680 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:28:12.855085 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:28:12.894577 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 15:28:12.929413 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:28:12.953698 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 15:28:12.962786 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:28:12.971277 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 15:28:12.972622 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 15:28:12.987089 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:28:12.990658 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:28:12.998077 disk-uuid[554]: Primary Header is updated. Feb 13 15:28:12.998077 disk-uuid[554]: Secondary Entries is updated. 
Feb 13 15:28:12.998077 disk-uuid[554]: Secondary Header is updated.
Feb 13 15:28:13.001857 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:28:13.006850 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:28:13.026986 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:28:13.197881 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 15:28:13.205907 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 15:28:13.206006 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 15:28:13.206020 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 15:28:13.206860 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 13 15:28:13.207864 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 15:28:13.208865 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 13 15:28:13.208886 kernel: ata3.00: applying bridge limits
Feb 13 15:28:13.209988 kernel: ata3.00: configured for UDMA/100
Feb 13 15:28:13.210855 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 15:28:13.263871 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 13 15:28:13.280691 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 15:28:13.280707 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Feb 13 15:28:14.008873 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:28:14.008963 disk-uuid[555]: The operation has completed successfully.
Feb 13 15:28:14.045760 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:28:14.045970 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:28:14.104091 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:28:14.107651 sh[591]: Success
Feb 13 15:28:14.120892 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 13 15:28:14.161558 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:28:14.174615 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:28:14.177999 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:28:14.188381 kernel: BTRFS info (device dm-0): first mount of filesystem 506754f7-5ef1-4c63-ad2a-b7b855a48f85
Feb 13 15:28:14.188428 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:28:14.188440 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:28:14.189409 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:28:14.190207 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:28:14.195254 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:28:14.196337 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:28:14.203042 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:28:14.205304 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:28:14.216873 kernel: BTRFS info (device vda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:28:14.216911 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:28:14.218432 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:28:14.220862 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:28:14.230810 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:28:14.232592 kernel: BTRFS info (device vda6): last unmount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:28:14.242288 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:28:14.247017 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:28:14.387345 ignition[682]: Ignition 2.20.0 Feb 13 15:28:14.387357 ignition[682]: Stage: fetch-offline Feb 13 15:28:14.387407 ignition[682]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:28:14.387418 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:28:14.387513 ignition[682]: parsed url from cmdline: "" Feb 13 15:28:14.387517 ignition[682]: no config URL provided Feb 13 15:28:14.387522 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:28:14.387532 ignition[682]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:28:14.387560 ignition[682]: op(1): [started] loading QEMU firmware config module Feb 13 15:28:14.387565 ignition[682]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 15:28:14.399308 ignition[682]: op(1): [finished] loading QEMU firmware config module Feb 13 15:28:14.401631 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:28:14.411010 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:28:14.438255 systemd-networkd[781]: lo: Link UP Feb 13 15:28:14.438266 systemd-networkd[781]: lo: Gained carrier Feb 13 15:28:14.440041 systemd-networkd[781]: Enumeration completed Feb 13 15:28:14.440591 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:28:14.440951 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:28:14.440956 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:28:14.442506 systemd-networkd[781]: eth0: Link UP Feb 13 15:28:14.442510 systemd-networkd[781]: eth0: Gained carrier Feb 13 15:28:14.448425 ignition[682]: parsing config with SHA512: 7221d2a42584169ef3a6a1211ef1ca324957c21cf6f23e8e6f8e151a9246e88c4b116da37f1312325dc0f9f505d8a174bbabde4f69ff9f4cc4ac82adcabd9937 Feb 13 15:28:14.442517 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:28:14.443741 systemd[1]: Reached target network.target - Network. Feb 13 15:28:14.452162 unknown[682]: fetched base config from "system" Feb 13 15:28:14.452171 unknown[682]: fetched user config from "qemu" Feb 13 15:28:14.453810 ignition[682]: fetch-offline: fetch-offline passed Feb 13 15:28:14.453977 ignition[682]: Ignition finished successfully Feb 13 15:28:14.456950 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:28:14.457420 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Feb 13 15:28:14.459205 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:28:14.463054 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:28:14.483930 ignition[786]: Ignition 2.20.0 Feb 13 15:28:14.483941 ignition[786]: Stage: kargs Feb 13 15:28:14.484092 ignition[786]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:28:14.484104 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:28:14.484997 ignition[786]: kargs: kargs passed Feb 13 15:28:14.485041 ignition[786]: Ignition finished successfully Feb 13 15:28:14.488400 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:28:14.499954 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:28:14.525424 ignition[795]: Ignition 2.20.0 Feb 13 15:28:14.525436 ignition[795]: Stage: disks Feb 13 15:28:14.525648 ignition[795]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:28:14.525660 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:28:14.526523 ignition[795]: disks: disks passed Feb 13 15:28:14.528990 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:28:14.526579 ignition[795]: Ignition finished successfully Feb 13 15:28:14.530780 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:28:14.532712 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:28:14.534666 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:28:14.536809 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:28:14.538998 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:28:14.546204 systemd-resolved[246]: Detected conflict on linux IN A 10.0.0.34 Feb 13 15:28:14.546221 systemd-resolved[246]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Feb 13 15:28:14.550971 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:28:14.563171 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:28:14.569667 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:28:15.195955 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:28:15.300877 kernel: EXT4-fs (vda9): mounted filesystem 8023eced-1511-4e72-a58a-db1b8cb3210e r/w with ordered data mode. Quota mode: none. Feb 13 15:28:15.301877 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:28:15.303577 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:28:15.318921 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:28:15.325596 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:28:15.327161 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Feb 13 15:28:15.336282 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (813) Feb 13 15:28:15.336337 kernel: BTRFS info (device vda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:28:15.336354 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:28:15.336369 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:28:15.336384 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:28:15.327209 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:28:15.327237 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:28:15.334899 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:28:15.339657 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:28:15.341668 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:28:15.384756 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:28:15.401565 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:28:15.406458 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:28:15.410464 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:28:15.506342 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:28:15.517925 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:28:15.519611 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:28:15.527877 kernel: BTRFS info (device vda6): last unmount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:28:15.603100 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:28:15.614645 ignition[927]: INFO : Ignition 2.20.0 Feb 13 15:28:15.614645 ignition[927]: INFO : Stage: mount Feb 13 15:28:15.616305 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:28:15.616305 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:28:15.616305 ignition[927]: INFO : mount: mount passed Feb 13 15:28:15.616305 ignition[927]: INFO : Ignition finished successfully Feb 13 15:28:15.622159 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:28:15.640944 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:28:15.791043 systemd-networkd[781]: eth0: Gained IPv6LL Feb 13 15:28:16.188462 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:28:16.202072 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:28:16.211481 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (941) Feb 13 15:28:16.211547 kernel: BTRFS info (device vda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:28:16.211561 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:28:16.212550 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:28:16.234864 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:28:16.236953 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:28:16.267032 ignition[958]: INFO : Ignition 2.20.0 Feb 13 15:28:16.267032 ignition[958]: INFO : Stage: files Feb 13 15:28:16.269079 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:28:16.269079 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:28:16.269079 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:28:16.272724 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:28:16.272724 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:28:16.272724 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:28:16.272724 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:28:16.278996 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:28:16.278996 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:28:16.278996 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 15:28:16.272753 unknown[958]: wrote ssh authorized keys file for user: core Feb 13 15:28:16.311658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:28:16.435824 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:28:16.437982 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:28:16.437982 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 15:28:16.952562 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 15:28:17.120857 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:28:17.120857 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:28:17.124625 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:28:17.124625 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:28:17.124625 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:28:17.124625 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:28:17.124625 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:28:17.124625 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:28:17.124625 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:28:17.124625 ignition[958]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:28:17.124625 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:28:17.124625 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:28:17.124625 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:28:17.124625 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:28:17.124625 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 15:28:17.526866 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 15:28:18.117617 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:28:18.117617 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 15:28:18.122212 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:28:18.124058 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:28:18.124058 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 15:28:18.124058 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Feb 13 15:28:18.124058 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:28:18.124058 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:28:18.124058 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Feb 13 15:28:18.124058 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 15:28:18.161624 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:28:18.166439 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:28:18.168441 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 15:28:18.168441 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:28:18.168441 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:28:18.168441 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:28:18.168441 ignition[958]: INFO : files: createResultFile: createFiles: op(13): 
[finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:28:18.168441 ignition[958]: INFO : files: files passed Feb 13 15:28:18.168441 ignition[958]: INFO : Ignition finished successfully Feb 13 15:28:18.181279 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:28:18.191123 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:28:18.194428 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:28:18.197611 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:28:18.198791 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:28:18.204987 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 15:28:18.209211 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:28:18.209211 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:28:18.212791 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:28:18.216133 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:28:18.219313 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:28:18.230027 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:28:18.256415 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:28:18.256569 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:28:18.259088 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:28:18.261390 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:28:18.261914 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:28:18.273975 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:28:18.289912 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:28:18.291712 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:28:18.305561 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:28:18.306961 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:28:18.309416 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:28:18.311680 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:28:18.311804 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:28:18.314315 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:28:18.316251 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:28:18.318686 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:28:18.321196 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:28:18.323594 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:28:18.326219 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:28:18.328775 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Feb 13 15:28:18.331584 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:28:18.333989 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:28:18.336555 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:28:18.338581 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:28:18.338730 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:28:18.341126 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:28:18.342765 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:28:18.344914 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:28:18.345037 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:28:18.347133 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:28:18.347248 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:28:18.349502 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:28:18.349616 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:28:18.351645 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:28:18.353391 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:28:18.353561 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:28:18.356237 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:28:18.358220 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:28:18.360231 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:28:18.360330 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:28:18.362279 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:28:18.362370 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:28:18.364623 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:28:18.364794 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:28:18.366862 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:28:18.366994 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:28:18.379118 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:28:18.383749 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:28:18.385752 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:28:18.386067 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:28:18.388625 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:28:18.388781 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:28:18.398788 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:28:18.399580 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:28:18.415661 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Feb 13 15:28:18.440727 ignition[1013]: INFO : Ignition 2.20.0 Feb 13 15:28:18.440727 ignition[1013]: INFO : Stage: umount Feb 13 15:28:18.442948 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:28:18.442948 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:28:18.446058 ignition[1013]: INFO : umount: umount passed Feb 13 15:28:18.447009 ignition[1013]: INFO : Ignition finished successfully Feb 13 15:28:18.450310 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:28:18.450464 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:28:18.452769 systemd[1]: Stopped target network.target - Network. Feb 13 15:28:18.453212 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:28:18.453287 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:28:18.453634 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:28:18.453698 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:28:18.454193 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:28:18.454255 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:28:18.454584 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:28:18.454646 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:28:18.455269 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:28:18.455707 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:28:18.474426 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:28:18.474596 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:28:18.478700 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 15:28:18.479005 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:28:18.479163 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:28:18.485089 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 15:28:18.486070 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:28:18.486144 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:28:18.503049 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:28:18.503337 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:28:18.503416 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:28:18.505549 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:28:18.505631 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:28:18.509623 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:28:18.509694 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:28:18.510091 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:28:18.510155 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:28:18.514932 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:28:18.516632 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Feb 13 15:28:18.516730 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:28:18.533713 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:28:18.533905 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:28:18.556233 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:28:18.556441 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:28:18.559407 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:28:18.559498 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:28:18.563926 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:28:18.564017 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:28:18.564532 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:28:18.564590 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:28:18.565608 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:28:18.565663 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:28:18.566556 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:28:18.566613 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:28:18.583070 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:28:18.585692 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:28:18.585780 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:28:18.589722 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:28:18.589793 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:28:18.590250 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:28:18.590301 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:28:18.590620 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:28:18.590670 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:28:18.600016 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 15:28:18.600143 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:28:18.600718 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:28:18.600857 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:28:19.013322 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:28:19.013481 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:28:19.015753 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:28:19.016790 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:28:19.016873 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:28:19.030977 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:28:19.039311 systemd[1]: Switching root. Feb 13 15:28:19.075373 systemd-journald[192]: Journal stopped Feb 13 15:28:20.658847 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
Feb 13 15:28:20.658925 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:28:20.658947 kernel: SELinux: policy capability open_perms=1 Feb 13 15:28:20.658958 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:28:20.658975 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:28:20.658987 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:28:20.658999 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:28:20.659016 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:28:20.659028 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:28:20.659045 kernel: audit: type=1403 audit(1739460499.679:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:28:20.659065 systemd[1]: Successfully loaded SELinux policy in 116.059ms. Feb 13 15:28:20.659088 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.465ms. Feb 13 15:28:20.659102 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:28:20.659131 systemd[1]: Detected virtualization kvm. Feb 13 15:28:20.659144 systemd[1]: Detected architecture x86-64. Feb 13 15:28:20.659156 systemd[1]: Detected first boot. Feb 13 15:28:20.659169 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:28:20.659182 zram_generator::config[1061]: No configuration found. Feb 13 15:28:20.659196 kernel: Guest personality initialized and is inactive Feb 13 15:28:20.659208 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Feb 13 15:28:20.659220 kernel: Initialized host personality Feb 13 15:28:20.659235 kernel: NET: Registered PF_VSOCK protocol family Feb 13 15:28:20.659246 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:28:20.659260 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 15:28:20.659273 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:28:20.659286 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:28:20.659298 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:28:20.659311 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:28:20.659324 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:28:20.659336 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:28:20.659351 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:28:20.659364 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:28:20.659377 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:28:20.659389 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:28:20.659417 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:28:20.659431 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:28:20.659444 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 15:28:20.659457 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:28:20.659469 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:28:20.659485 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:28:20.659498 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:28:20.659510 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:28:20.659523 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:28:20.659536 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:28:20.659548 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:28:20.659561 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:28:20.659576 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:28:20.659588 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:28:20.659601 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:28:20.659614 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:28:20.659626 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:28:20.659638 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:28:20.659652 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:28:20.659664 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 15:28:20.659677 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:28:20.659700 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:28:20.659713 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:28:20.659725 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:28:20.659738 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:28:20.659751 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:28:20.659763 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:28:20.659776 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:28:20.659789 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:28:20.659802 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:28:20.659817 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:28:20.659844 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:28:20.659857 systemd[1]: Reached target machines.target - Containers. Feb 13 15:28:20.659870 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:28:20.659883 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:28:20.659895 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Feb 13 15:28:20.659908 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:28:20.659921 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:28:20.659937 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:28:20.659949 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:28:20.659962 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:28:20.659975 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:28:20.659995 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:28:20.660008 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:28:20.660021 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:28:20.660034 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:28:20.660046 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:28:20.660063 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:28:20.660076 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:28:20.660089 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:28:20.660104 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:28:20.660116 kernel: fuse: init (API version 7.39) Feb 13 15:28:20.660128 kernel: loop: module loaded Feb 13 15:28:20.660141 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:28:20.660153 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 15:28:20.660195 systemd-journald[1125]: Collecting audit messages is disabled. Feb 13 15:28:20.660224 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:28:20.660237 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:28:20.660250 systemd[1]: Stopped verity-setup.service. Feb 13 15:28:20.660263 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:28:20.660279 systemd-journald[1125]: Journal started Feb 13 15:28:20.660301 systemd-journald[1125]: Runtime Journal (/run/log/journal/88df34d37ea94d66baaaf64fe5a3e711) is 6M, max 48.4M, 42.3M free. Feb 13 15:28:20.365659 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:28:20.381267 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 15:28:20.381879 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:28:20.664859 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:28:20.666764 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:28:20.668721 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:28:20.670167 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:28:20.672143 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Feb 13 15:28:20.726146 kernel: ACPI: bus type drm_connector registered Feb 13 15:28:20.674154 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:28:20.675402 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:28:20.676772 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:28:20.678416 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:28:20.678643 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:28:20.680180 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:28:20.680390 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:28:20.681871 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:28:20.682090 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:28:20.683698 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:28:20.683946 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:28:20.685403 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:28:20.685611 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:28:20.687079 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:28:20.688547 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:28:20.728199 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:28:20.728512 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:28:20.748324 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:28:20.751480 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:28:20.752754 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:28:20.752784 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:28:20.754943 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 15:28:20.757811 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:28:20.762378 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:28:20.763656 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:28:20.765410 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:28:20.768126 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:28:20.769402 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:28:20.777006 systemd-journald[1125]: Time spent on flushing to /var/log/journal/88df34d37ea94d66baaaf64fe5a3e711 is 14.091ms for 964 entries. Feb 13 15:28:20.777006 systemd-journald[1125]: System Journal (/var/log/journal/88df34d37ea94d66baaaf64fe5a3e711) is 8M, max 195.6M, 187.6M free. Feb 13 15:28:20.821752 systemd-journald[1125]: Received client request to flush runtime journal. Feb 13 15:28:20.774807 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Feb 13 15:28:20.775965 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:28:20.778271 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:28:20.780484 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:28:20.787422 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:28:20.791421 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:28:20.795552 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:28:20.797181 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 15:28:20.800131 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:28:20.807183 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:28:20.808748 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:28:20.810871 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:28:20.817861 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:28:20.819203 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:28:20.828084 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 15:28:20.829890 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:28:20.833871 kernel: loop0: detected capacity change from 0 to 210664 Feb 13 15:28:20.835715 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:28:20.856596 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 15:28:20.859110 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:28:20.864183 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:28:20.865455 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Feb 13 15:28:20.865476 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Feb 13 15:28:20.908193 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:28:20.910229 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:28:20.917018 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:28:20.923221 udevadm[1198]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:28:20.931870 kernel: loop1: detected capacity change from 0 to 138176 Feb 13 15:28:20.960635 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:28:20.972908 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:28:20.978851 kernel: loop2: detected capacity change from 0 to 147912 Feb 13 15:28:21.038113 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Feb 13 15:28:21.038132 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Feb 13 15:28:21.043471 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 15:28:21.046906 kernel: loop3: detected capacity change from 0 to 210664 Feb 13 15:28:21.059866 kernel: loop4: detected capacity change from 0 to 138176 Feb 13 15:28:21.076864 kernel: loop5: detected capacity change from 0 to 147912 Feb 13 15:28:21.090713 (sd-merge)[1208]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 15:28:21.091952 (sd-merge)[1208]: Merged extensions into '/usr'. Feb 13 15:28:21.123142 systemd[1]: Reload requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:28:21.123162 systemd[1]: Reloading... Feb 13 15:28:21.202200 zram_generator::config[1233]: No configuration found. Feb 13 15:28:21.307091 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:28:21.353660 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:28:21.430259 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:28:21.431283 systemd[1]: Reloading finished in 307 ms. Feb 13 15:28:21.462333 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:28:21.464650 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:28:21.492226 systemd[1]: Starting ensure-sysext.service... Feb 13 15:28:21.495632 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:28:21.510093 systemd[1]: Reload requested from client PID 1274 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:28:21.510113 systemd[1]: Reloading... Feb 13 15:28:21.567875 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:28:21.568635 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:28:21.569623 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:28:21.570035 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Feb 13 15:28:21.570169 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Feb 13 15:28:21.576021 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:28:21.576136 systemd-tmpfiles[1275]: Skipping /boot Feb 13 15:28:21.604925 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:28:21.605083 systemd-tmpfiles[1275]: Skipping /boot Feb 13 15:28:21.608870 zram_generator::config[1307]: No configuration found. Feb 13 15:28:21.790426 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:28:21.865171 systemd[1]: Reloading finished in 354 ms. Feb 13 15:28:21.879123 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:28:21.903086 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:28:21.925091 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:28:21.927684 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Feb 13 15:28:21.930530 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:28:21.934530 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:28:21.938186 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:28:21.944609 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:28:21.950741 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:28:21.951349 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:28:21.958186 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:28:21.962256 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:28:21.966664 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:28:21.968028 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:28:21.968160 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:28:21.968302 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:28:21.969818 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:28:21.970217 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:28:21.975302 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:28:21.975935 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:28:21.980928 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:28:21.984167 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:28:21.984490 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:28:21.986491 systemd-udevd[1348]: Using default interface naming scheme 'v255'. Feb 13 15:28:21.994428 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:28:22.004379 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:28:22.005003 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:28:22.008527 augenrules[1376]: No rules Feb 13 15:28:22.019292 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:28:22.023100 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:28:22.025622 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:28:22.030073 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:28:22.032090 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 15:28:22.032242 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:28:22.034137 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:28:22.038917 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:28:22.040194 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:28:22.043760 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:28:22.052247 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:28:22.052670 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:28:22.057185 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:28:22.060167 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:28:22.060562 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:28:22.062591 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:28:22.063050 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:28:22.064951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:28:22.065446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:28:22.067553 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:28:22.068380 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:28:22.077425 systemd[1]: Finished ensure-sysext.service. Feb 13 15:28:22.082671 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:28:22.100664 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:28:22.110192 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:28:22.112327 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:28:22.112500 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:28:22.115915 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:28:22.117902 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:28:22.118187 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:28:22.139889 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1405) Feb 13 15:28:22.209154 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:28:22.222293 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:28:22.277944 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Feb 13 15:28:22.291852 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Feb 13 15:28:22.291913 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 15:28:22.296852 kernel: ACPI: button: Power Button [PWRF] Feb 13 15:28:22.304858 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 15:28:22.308445 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 15:28:22.308649 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 15:28:22.305624 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:28:22.309034 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:28:22.323399 systemd-resolved[1347]: Positive Trust Anchors: Feb 13 15:28:22.323418 systemd-resolved[1347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:28:22.323451 systemd-resolved[1347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:28:22.327413 systemd-networkd[1415]: lo: Link UP Feb 13 15:28:22.327425 systemd-networkd[1415]: lo: Gained carrier Feb 13 15:28:22.331500 systemd-resolved[1347]: Defaulting to hostname 'linux'. Feb 13 15:28:22.332004 systemd-networkd[1415]: Enumeration completed Feb 13 15:28:22.335851 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:28:22.336440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:28:22.336826 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:28:22.336973 systemd-networkd[1415]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:28:22.337958 systemd-networkd[1415]: eth0: Link UP Feb 13 15:28:22.337968 systemd-networkd[1415]: eth0: Gained carrier Feb 13 15:28:22.337982 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:28:22.338052 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:28:22.339603 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:28:22.341580 systemd[1]: Reached target network.target - Network. Feb 13 15:28:22.342672 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:28:22.345976 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 15:28:22.349258 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:28:22.363792 systemd-networkd[1415]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:28:22.366080 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection. Feb 13 15:28:23.016585 systemd-timesyncd[1423]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Feb 13 15:28:23.016647 systemd-timesyncd[1423]: Initial clock synchronization to Thu 2025-02-13 15:28:23.016482 UTC. Feb 13 15:28:23.017652 systemd-resolved[1347]: Clock change detected. Flushing caches. Feb 13 15:28:23.059118 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 15:28:23.123295 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:28:23.137770 kernel: kvm_amd: TSC scaling supported Feb 13 15:28:23.137826 kernel: kvm_amd: Nested Virtualization enabled Feb 13 15:28:23.137841 kernel: kvm_amd: Nested Paging enabled Feb 13 15:28:23.137875 kernel: kvm_amd: LBR virtualization supported Feb 13 15:28:23.138893 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 15:28:23.138936 kernel: kvm_amd: Virtual GIF supported Feb 13 15:28:23.157472 kernel: EDAC MC: Ver: 3.0.0 Feb 13 15:28:23.187179 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:28:23.199608 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:28:23.207233 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:28:23.241920 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:28:23.243542 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:28:23.244722 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:28:23.246022 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:28:23.247416 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:28:23.249024 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:28:23.250592 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:28:23.251934 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:28:23.253239 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:28:23.253286 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:28:23.254296 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:28:23.256480 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:28:23.260125 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:28:23.264323 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 15:28:23.265899 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 15:28:23.267227 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 15:28:23.272740 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:28:23.274597 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 15:28:23.277433 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:28:23.279178 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:28:23.280471 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:28:23.281508 systemd[1]: Reached target basic.target - Basic System. 
Feb 13 15:28:23.282573 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:28:23.282607 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:28:23.284173 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:28:23.286428 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:28:23.289460 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:28:23.291573 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:28:23.297034 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:28:23.298254 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:28:23.300745 jq[1457]: false Feb 13 15:28:23.303471 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:28:23.307693 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:28:23.312531 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:28:23.316783 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:28:23.323664 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:28:23.326053 dbus-daemon[1456]: [system] SELinux support is enabled Feb 13 15:28:23.326968 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:28:23.327763 extend-filesystems[1458]: Found loop3 Feb 13 15:28:23.327763 extend-filesystems[1458]: Found loop4 Feb 13 15:28:23.327763 extend-filesystems[1458]: Found loop5 Feb 13 15:28:23.327763 extend-filesystems[1458]: Found sr0 Feb 13 15:28:23.327763 extend-filesystems[1458]: Found vda Feb 13 15:28:23.327763 extend-filesystems[1458]: Found vda1 Feb 13 15:28:23.327763 extend-filesystems[1458]: Found vda2 Feb 13 15:28:23.327763 extend-filesystems[1458]: Found vda3 Feb 13 15:28:23.327763 extend-filesystems[1458]: Found usr Feb 13 15:28:23.327763 extend-filesystems[1458]: Found vda4 Feb 13 15:28:23.327763 extend-filesystems[1458]: Found vda6 Feb 13 15:28:23.327763 extend-filesystems[1458]: Found vda7 Feb 13 15:28:23.327553 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:28:23.337175 extend-filesystems[1458]: Found vda9 Feb 13 15:28:23.337175 extend-filesystems[1458]: Checking size of /dev/vda9 Feb 13 15:28:23.329602 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:28:23.341030 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:28:23.344604 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:28:23.347771 update_engine[1472]: I20250213 15:28:23.347697 1472 main.cc:92] Flatcar Update Engine starting Feb 13 15:28:23.348535 extend-filesystems[1458]: Resized partition /dev/vda9 Feb 13 15:28:23.349487 jq[1476]: true Feb 13 15:28:23.350510 update_engine[1472]: I20250213 15:28:23.350422 1472 update_check_scheduler.cc:74] Next update check in 7m5s Feb 13 15:28:23.351500 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Feb 13 15:28:23.355877 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:28:23.356155 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:28:23.357458 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1388) Feb 13 15:28:23.357551 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:28:23.357810 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:28:23.362970 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:28:23.364868 extend-filesystems[1480]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:28:23.363382 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:28:23.378288 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:28:23.378383 jq[1482]: true Feb 13 15:28:23.383761 (ntainerd)[1484]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:28:23.403435 tar[1481]: linux-amd64/helm Feb 13 15:28:23.416991 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:28:23.453187 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:28:23.453221 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:28:23.454771 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:28:23.454789 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:28:23.462724 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:28:23.468989 sshd_keygen[1474]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:28:23.473003 systemd-logind[1469]: Watching system buttons on /dev/input/event2 (Power Button) Feb 13 15:28:23.473030 systemd-logind[1469]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:28:23.476344 systemd-logind[1469]: New seat seat0. Feb 13 15:28:23.478376 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:28:23.493465 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:28:23.514796 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:28:23.524880 extend-filesystems[1480]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:28:23.524880 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:28:23.524880 extend-filesystems[1480]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:28:23.529518 extend-filesystems[1458]: Resized filesystem in /dev/vda9 Feb 13 15:28:23.525105 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:28:23.526788 locksmithd[1510]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:28:23.529565 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:28:23.530107 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Feb 13 15:28:23.534378 bash[1509]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:28:23.533144 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:28:23.559168 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:28:23.566421 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:28:23.566905 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:28:23.577723 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:28:23.617131 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:28:23.624861 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:28:23.627915 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:28:23.629640 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:28:23.796407 containerd[1484]: time="2025-02-13T15:28:23.795310449Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:28:23.820052 containerd[1484]: time="2025-02-13T15:28:23.820011513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:28:23.822126 containerd[1484]: time="2025-02-13T15:28:23.822092675Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:28:23.822126 containerd[1484]: time="2025-02-13T15:28:23.822117772Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:28:23.822184 containerd[1484]: time="2025-02-13T15:28:23.822133472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:28:23.822366 containerd[1484]: time="2025-02-13T15:28:23.822347824Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:28:23.822397 containerd[1484]: time="2025-02-13T15:28:23.822366859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:28:23.822478 containerd[1484]: time="2025-02-13T15:28:23.822461747Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:28:23.822513 containerd[1484]: time="2025-02-13T15:28:23.822478990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:28:23.822762 containerd[1484]: time="2025-02-13T15:28:23.822736352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:28:23.822762 containerd[1484]: time="2025-02-13T15:28:23.822754747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:28:23.822809 containerd[1484]: time="2025-02-13T15:28:23.822767471Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:28:23.822809 containerd[1484]: time="2025-02-13T15:28:23.822776758Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:28:23.822902 containerd[1484]: time="2025-02-13T15:28:23.822882647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:28:23.823185 containerd[1484]: time="2025-02-13T15:28:23.823162221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:28:23.823347 containerd[1484]: time="2025-02-13T15:28:23.823327180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:28:23.823347 containerd[1484]: time="2025-02-13T15:28:23.823342489Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:28:23.823514 containerd[1484]: time="2025-02-13T15:28:23.823492510Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:28:23.823578 containerd[1484]: time="2025-02-13T15:28:23.823565617Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:28:23.829583 containerd[1484]: time="2025-02-13T15:28:23.829554456Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:28:23.829636 containerd[1484]: time="2025-02-13T15:28:23.829605652Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:28:23.829636 containerd[1484]: time="2025-02-13T15:28:23.829621812Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:28:23.829674 containerd[1484]: time="2025-02-13T15:28:23.829636500Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:28:23.829674 containerd[1484]: time="2025-02-13T15:28:23.829649294Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:28:23.829803 containerd[1484]: time="2025-02-13T15:28:23.829787112Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:28:23.830039 containerd[1484]: time="2025-02-13T15:28:23.830019057Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:28:23.830159 containerd[1484]: time="2025-02-13T15:28:23.830143641Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:28:23.830194 containerd[1484]: time="2025-02-13T15:28:23.830162296Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:28:23.830194 containerd[1484]: time="2025-02-13T15:28:23.830176002Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:28:23.830194 containerd[1484]: time="2025-02-13T15:28:23.830189206Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 13 15:28:23.830259 containerd[1484]: time="2025-02-13T15:28:23.830202341Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:28:23.830259 containerd[1484]: time="2025-02-13T15:28:23.830214313Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:28:23.830259 containerd[1484]: time="2025-02-13T15:28:23.830227258Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:28:23.830259 containerd[1484]: time="2025-02-13T15:28:23.830239851Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:28:23.830326 containerd[1484]: time="2025-02-13T15:28:23.830263135Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:28:23.830326 containerd[1484]: time="2025-02-13T15:28:23.830275809Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:28:23.830326 containerd[1484]: time="2025-02-13T15:28:23.830287020Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:28:23.830326 containerd[1484]: time="2025-02-13T15:28:23.830310614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:28:23.976685 containerd[1484]: time="2025-02-13T15:28:23.830322697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:28:23.976685 containerd[1484]: time="2025-02-13T15:28:23.975994110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:28:23.976685 containerd[1484]: time="2025-02-13T15:28:23.976062599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:28:23.976685 containerd[1484]: time="2025-02-13T15:28:23.976085121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:28:23.976685 containerd[1484]: time="2025-02-13T15:28:23.976113474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:28:23.976685 containerd[1484]: time="2025-02-13T15:28:23.976139493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:28:23.976685 containerd[1484]: time="2025-02-13T15:28:23.976165231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:28:23.976685 containerd[1484]: time="2025-02-13T15:28:23.976185640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:28:23.976685 containerd[1484]: time="2025-02-13T15:28:23.976217219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:28:23.976685 containerd[1484]: time="2025-02-13T15:28:23.976233579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:28:23.976685 containerd[1484]: time="2025-02-13T15:28:23.976250591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Feb 13 15:28:23.976685 containerd[1484]: time="2025-02-13T15:28:23.976270449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:28:23.976685 containerd[1484]: time="2025-02-13T15:28:23.976299092Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:28:23.976685 containerd[1484]: time="2025-02-13T15:28:23.976344497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:28:23.976685 containerd[1484]: time="2025-02-13T15:28:23.976385945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:28:23.977181 containerd[1484]: time="2025-02-13T15:28:23.976403228Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:28:23.978385 containerd[1484]: time="2025-02-13T15:28:23.978358724Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:28:23.978520 containerd[1484]: time="2025-02-13T15:28:23.978496152Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:28:23.978685 containerd[1484]: time="2025-02-13T15:28:23.978665089Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:28:23.978799 containerd[1484]: time="2025-02-13T15:28:23.978745269Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:28:23.978799 containerd[1484]: time="2025-02-13T15:28:23.978766499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:28:23.978799 containerd[1484]: time="2025-02-13T15:28:23.978813717Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:28:23.979021 containerd[1484]: time="2025-02-13T15:28:23.978840467Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:28:23.979021 containerd[1484]: time="2025-02-13T15:28:23.978857630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:28:23.979415 containerd[1484]: time="2025-02-13T15:28:23.979336357Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:28:23.979678 containerd[1484]: time="2025-02-13T15:28:23.979421286Z" level=info msg="Connect containerd service" Feb 13 15:28:23.979678 containerd[1484]: time="2025-02-13T15:28:23.979507518Z" level=info msg="using legacy CRI server" Feb 13 15:28:23.979678 containerd[1484]: time="2025-02-13T15:28:23.979522606Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:28:23.979780 containerd[1484]: time="2025-02-13T15:28:23.979695651Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:28:23.980735 containerd[1484]: time="2025-02-13T15:28:23.980697269Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:28:23.981429 
containerd[1484]: time="2025-02-13T15:28:23.980959821Z" level=info msg="Start subscribing containerd event" Feb 13 15:28:23.981429 containerd[1484]: time="2025-02-13T15:28:23.981087701Z" level=info msg="Start recovering state" Feb 13 15:28:23.981429 containerd[1484]: time="2025-02-13T15:28:23.981189121Z" level=info msg="Start event monitor" Feb 13 15:28:23.981429 containerd[1484]: time="2025-02-13T15:28:23.981200693Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:28:23.981429 containerd[1484]: time="2025-02-13T15:28:23.981219418Z" level=info msg="Start snapshots syncer" Feb 13 15:28:23.981429 containerd[1484]: time="2025-02-13T15:28:23.981234166Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:28:23.981429 containerd[1484]: time="2025-02-13T15:28:23.981247761Z" level=info msg="Start streaming server" Feb 13 15:28:23.981429 containerd[1484]: time="2025-02-13T15:28:23.981278248Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:28:23.981537 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:28:23.982837 containerd[1484]: time="2025-02-13T15:28:23.981971879Z" level=info msg="containerd successfully booted in 0.187993s" Feb 13 15:28:24.130400 tar[1481]: linux-amd64/LICENSE Feb 13 15:28:24.130400 tar[1481]: linux-amd64/README.md Feb 13 15:28:24.149338 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:28:24.374734 systemd-networkd[1415]: eth0: Gained IPv6LL Feb 13 15:28:24.378074 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:28:24.384045 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:28:24.394696 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:28:24.397488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:28:24.399927 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:28:24.427287 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:28:24.429720 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:28:24.430031 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:28:24.433804 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:28:25.492220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:28:25.494056 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:28:25.495428 systemd[1]: Startup finished in 882ms (kernel) + 7.880s (initrd) + 5.282s (userspace) = 14.044s. Feb 13 15:28:25.518305 (kubelet)[1570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:28:26.123893 kubelet[1570]: E0213 15:28:26.123809 1570 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:28:26.128657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:28:26.128895 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:28:26.129288 systemd[1]: kubelet.service: Consumed 1.589s CPU time, 245.1M memory peak. Feb 13 15:28:27.542225 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:28:27.543613 systemd[1]: Started sshd@0-10.0.0.34:22-10.0.0.1:54622.service - OpenSSH per-connection server daemon (10.0.0.1:54622). Feb 13 15:28:27.599164 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 54622 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:28:27.601540 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:27.613026 systemd-logind[1469]: New session 1 of user core. Feb 13 15:28:27.614667 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:28:27.623721 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:28:27.637494 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:28:27.640376 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:28:27.648323 (systemd)[1589]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:28:27.650848 systemd-logind[1469]: New session c1 of user core. Feb 13 15:28:27.793366 systemd[1589]: Queued start job for default target default.target. Feb 13 15:28:27.803826 systemd[1589]: Created slice app.slice - User Application Slice. Feb 13 15:28:27.803853 systemd[1589]: Reached target paths.target - Paths. Feb 13 15:28:27.803900 systemd[1589]: Reached target timers.target - Timers. Feb 13 15:28:27.805610 systemd[1589]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:28:27.817224 systemd[1589]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:28:27.817354 systemd[1589]: Reached target sockets.target - Sockets. Feb 13 15:28:27.817402 systemd[1589]: Reached target basic.target - Basic System. Feb 13 15:28:27.817471 systemd[1589]: Reached target default.target - Main User Target. Feb 13 15:28:27.817513 systemd[1589]: Startup finished in 160ms. Feb 13 15:28:27.817849 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:28:27.819562 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:28:27.892107 systemd[1]: Started sshd@1-10.0.0.34:22-10.0.0.1:54636.service - OpenSSH per-connection server daemon (10.0.0.1:54636). Feb 13 15:28:27.927182 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 54636 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:28:27.928867 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:27.933579 systemd-logind[1469]: New session 2 of user core. Feb 13 15:28:27.951587 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:28:28.004779 sshd[1602]: Connection closed by 10.0.0.1 port 54636 Feb 13 15:28:28.005117 sshd-session[1600]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:28.021433 systemd[1]: sshd@1-10.0.0.34:22-10.0.0.1:54636.service: Deactivated successfully. Feb 13 15:28:28.023792 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:28:28.025655 systemd-logind[1469]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:28:28.038975 systemd[1]: Started sshd@2-10.0.0.34:22-10.0.0.1:54642.service - OpenSSH per-connection server daemon (10.0.0.1:54642). Feb 13 15:28:28.040158 systemd-logind[1469]: Removed session 2. 
Feb 13 15:28:28.074869 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 54642 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:28:28.076673 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:28.081266 systemd-logind[1469]: New session 3 of user core. Feb 13 15:28:28.090597 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:28:28.141053 sshd[1610]: Connection closed by 10.0.0.1 port 54642 Feb 13 15:28:28.141536 sshd-session[1607]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:28.159262 systemd[1]: sshd@2-10.0.0.34:22-10.0.0.1:54642.service: Deactivated successfully. Feb 13 15:28:28.161175 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:28:28.162684 systemd-logind[1469]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:28:28.164067 systemd[1]: Started sshd@3-10.0.0.34:22-10.0.0.1:54656.service - OpenSSH per-connection server daemon (10.0.0.1:54656). Feb 13 15:28:28.164833 systemd-logind[1469]: Removed session 3. Feb 13 15:28:28.207307 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 54656 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:28:28.209074 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:28.213884 systemd-logind[1469]: New session 4 of user core. Feb 13 15:28:28.230799 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:28:28.285393 sshd[1618]: Connection closed by 10.0.0.1 port 54656 Feb 13 15:28:28.285774 sshd-session[1615]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:28.298102 systemd[1]: sshd@3-10.0.0.34:22-10.0.0.1:54656.service: Deactivated successfully. Feb 13 15:28:28.299995 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:28:28.301544 systemd-logind[1469]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:28:28.317740 systemd[1]: Started sshd@4-10.0.0.34:22-10.0.0.1:54662.service - OpenSSH per-connection server daemon (10.0.0.1:54662). Feb 13 15:28:28.318762 systemd-logind[1469]: Removed session 4. Feb 13 15:28:28.354684 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 54662 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:28:28.356019 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:28.360274 systemd-logind[1469]: New session 5 of user core. Feb 13 15:28:28.370786 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:28:28.433424 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:28:28.433899 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:28:28.449983 sudo[1627]: pam_unix(sudo:session): session closed for user root Feb 13 15:28:28.452026 sshd[1626]: Connection closed by 10.0.0.1 port 54662 Feb 13 15:28:28.452556 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:28.469368 systemd[1]: sshd@4-10.0.0.34:22-10.0.0.1:54662.service: Deactivated successfully. Feb 13 15:28:28.471495 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:28:28.473372 systemd-logind[1469]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:28:28.474869 systemd[1]: Started sshd@5-10.0.0.34:22-10.0.0.1:54674.service - OpenSSH per-connection server daemon (10.0.0.1:54674). 
Feb 13 15:28:28.475836 systemd-logind[1469]: Removed session 5. Feb 13 15:28:28.515012 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 54674 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:28:28.516578 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:28.521649 systemd-logind[1469]: New session 6 of user core. Feb 13 15:28:28.539788 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:28:28.597373 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:28:28.597835 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:28:28.602405 sudo[1637]: pam_unix(sudo:session): session closed for user root Feb 13 15:28:28.610018 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:28:28.610367 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:28:28.632876 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:28:28.671712 augenrules[1659]: No rules Feb 13 15:28:28.674151 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:28:28.674532 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:28:28.675947 sudo[1636]: pam_unix(sudo:session): session closed for user root Feb 13 15:28:28.677788 sshd[1635]: Connection closed by 10.0.0.1 port 54674 Feb 13 15:28:28.678149 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:28.691273 systemd[1]: sshd@5-10.0.0.34:22-10.0.0.1:54674.service: Deactivated successfully. Feb 13 15:28:28.693858 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:28:28.695508 systemd-logind[1469]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:28:28.709941 systemd[1]: Started sshd@6-10.0.0.34:22-10.0.0.1:54678.service - OpenSSH per-connection server daemon (10.0.0.1:54678). Feb 13 15:28:28.711068 systemd-logind[1469]: Removed session 6. Feb 13 15:28:28.752158 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 54678 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:28:28.753941 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:28.758982 systemd-logind[1469]: New session 7 of user core. Feb 13 15:28:28.772729 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:28:28.828275 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:28:28.828637 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:28:29.388686 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:28:29.388866 (dockerd)[1691]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:28:29.982485 dockerd[1691]: time="2025-02-13T15:28:29.982379562Z" level=info msg="Starting up" Feb 13 15:28:30.470362 dockerd[1691]: time="2025-02-13T15:28:30.470191880Z" level=info msg="Loading containers: start." 
Feb 13 15:28:30.679481 kernel: Initializing XFRM netlink socket Feb 13 15:28:30.768071 systemd-networkd[1415]: docker0: Link UP Feb 13 15:28:30.816926 dockerd[1691]: time="2025-02-13T15:28:30.816869901Z" level=info msg="Loading containers: done." Feb 13 15:28:30.832998 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1934161597-merged.mount: Deactivated successfully. Feb 13 15:28:30.834013 dockerd[1691]: time="2025-02-13T15:28:30.833970735Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:28:30.834106 dockerd[1691]: time="2025-02-13T15:28:30.834089788Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:28:30.834246 dockerd[1691]: time="2025-02-13T15:28:30.834227446Z" level=info msg="Daemon has completed initialization" Feb 13 15:28:30.871302 dockerd[1691]: time="2025-02-13T15:28:30.871198687Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:28:30.871411 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:28:31.614846 containerd[1484]: time="2025-02-13T15:28:31.614789710Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 15:28:33.321102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount905745073.mount: Deactivated successfully. Feb 13 15:28:34.733992 containerd[1484]: time="2025-02-13T15:28:34.733922684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:34.734961 containerd[1484]: time="2025-02-13T15:28:34.734866353Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214" Feb 13 15:28:34.735802 containerd[1484]: time="2025-02-13T15:28:34.735771370Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:34.738729 containerd[1484]: time="2025-02-13T15:28:34.738673421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:34.739589 containerd[1484]: time="2025-02-13T15:28:34.739557709Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 3.12471553s" Feb 13 15:28:34.739651 containerd[1484]: time="2025-02-13T15:28:34.739591974Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 15:28:34.763289 containerd[1484]: time="2025-02-13T15:28:34.763235475Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 15:28:36.379306 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:28:36.388609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 15:28:36.571931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:28:36.577350 (kubelet)[1968]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:28:37.246253 kubelet[1968]: E0213 15:28:37.246181 1968 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:28:37.253865 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:28:37.254077 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:28:37.254504 systemd[1]: kubelet.service: Consumed 242ms CPU time, 97.1M memory peak. Feb 13 15:28:37.688269 containerd[1484]: time="2025-02-13T15:28:37.688098503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:37.689305 containerd[1484]: time="2025-02-13T15:28:37.689259159Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545" Feb 13 15:28:37.691234 containerd[1484]: time="2025-02-13T15:28:37.691200689Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:37.694951 containerd[1484]: time="2025-02-13T15:28:37.694882984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:37.695996 containerd[1484]: time="2025-02-13T15:28:37.695967387Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 2.932692538s" Feb 13 15:28:37.696042 containerd[1484]: time="2025-02-13T15:28:37.695998836Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 15:28:37.719485 containerd[1484]: time="2025-02-13T15:28:37.719418818Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 15:28:38.799280 containerd[1484]: time="2025-02-13T15:28:38.799187675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:38.799981 containerd[1484]: time="2025-02-13T15:28:38.799880524Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130" Feb 13 15:28:38.801144 containerd[1484]: time="2025-02-13T15:28:38.801108336Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:38.804033 containerd[1484]: 
time="2025-02-13T15:28:38.804006500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:38.805057 containerd[1484]: time="2025-02-13T15:28:38.805014830Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 1.085535459s" Feb 13 15:28:38.805057 containerd[1484]: time="2025-02-13T15:28:38.805056989Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 15:28:38.830584 containerd[1484]: time="2025-02-13T15:28:38.830540982Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 15:28:40.097593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount831825810.mount: Deactivated successfully. Feb 13 15:28:40.901914 containerd[1484]: time="2025-02-13T15:28:40.901838604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:40.903198 containerd[1484]: time="2025-02-13T15:28:40.903139173Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 15:28:40.912004 containerd[1484]: time="2025-02-13T15:28:40.911951145Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:40.915429 containerd[1484]: time="2025-02-13T15:28:40.915366199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:40.916020 containerd[1484]: time="2025-02-13T15:28:40.915979068Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.085406978s" Feb 13 15:28:40.916020 containerd[1484]: time="2025-02-13T15:28:40.916015436Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 15:28:40.941149 containerd[1484]: time="2025-02-13T15:28:40.941096613Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:28:41.469069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1608481452.mount: Deactivated successfully. 
Feb 13 15:28:42.313249 containerd[1484]: time="2025-02-13T15:28:42.313159441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:42.314044 containerd[1484]: time="2025-02-13T15:28:42.313938231Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 15:28:42.315223 containerd[1484]: time="2025-02-13T15:28:42.315168387Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:42.321615 containerd[1484]: time="2025-02-13T15:28:42.321544833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:42.323077 containerd[1484]: time="2025-02-13T15:28:42.323046018Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.381906675s" Feb 13 15:28:42.323137 containerd[1484]: time="2025-02-13T15:28:42.323079140Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 15:28:42.350327 containerd[1484]: time="2025-02-13T15:28:42.350268540Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:28:42.810678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4199235302.mount: Deactivated successfully. 
Feb 13 15:28:42.817255 containerd[1484]: time="2025-02-13T15:28:42.817192733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:42.818032 containerd[1484]: time="2025-02-13T15:28:42.817978446Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 15:28:42.819222 containerd[1484]: time="2025-02-13T15:28:42.819181782Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:42.821652 containerd[1484]: time="2025-02-13T15:28:42.821617028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:42.822430 containerd[1484]: time="2025-02-13T15:28:42.822385459Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 472.075391ms" Feb 13 15:28:42.822506 containerd[1484]: time="2025-02-13T15:28:42.822432367Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 15:28:42.845601 containerd[1484]: time="2025-02-13T15:28:42.845555663Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 15:28:43.456731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1982165230.mount: Deactivated successfully. Feb 13 15:28:45.575414 containerd[1484]: time="2025-02-13T15:28:45.575338465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:45.576164 containerd[1484]: time="2025-02-13T15:28:45.576108349Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Feb 13 15:28:45.577273 containerd[1484]: time="2025-02-13T15:28:45.577232537Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:45.580123 containerd[1484]: time="2025-02-13T15:28:45.580089914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:45.582207 containerd[1484]: time="2025-02-13T15:28:45.582165396Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.736567463s" Feb 13 15:28:45.582207 containerd[1484]: time="2025-02-13T15:28:45.582196775Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 15:28:47.504511 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 15:28:47.517630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:28:47.688672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:28:47.693306 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:28:47.729283 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:28:47.733278 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:28:47.733591 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:28:47.733809 systemd[1]: kubelet.service: Consumed 207ms CPU time, 94.5M memory peak. Feb 13 15:28:47.746655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:28:47.766917 systemd[1]: Reload requested from client PID 2216 ('systemctl') (unit session-7.scope)... Feb 13 15:28:47.766938 systemd[1]: Reloading... Feb 13 15:28:47.872471 zram_generator::config[2266]: No configuration found. Feb 13 15:28:48.496402 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:28:48.601268 systemd[1]: Reloading finished in 833 ms. Feb 13 15:28:48.657722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:28:48.660891 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:28:48.662584 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:28:48.662897 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:28:48.662949 systemd[1]: kubelet.service: Consumed 143ms CPU time, 83.5M memory peak. Feb 13 15:28:48.664844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:28:48.812828 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:28:48.817286 (kubelet)[2310]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:28:48.868645 kubelet[2310]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:28:48.868645 kubelet[2310]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:28:48.868645 kubelet[2310]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:28:48.869164 kubelet[2310]: I0213 15:28:48.868701 2310 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:28:49.090543 kubelet[2310]: I0213 15:28:49.090376 2310 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:28:49.090543 kubelet[2310]: I0213 15:28:49.090409 2310 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:28:49.090705 kubelet[2310]: I0213 15:28:49.090630 2310 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:28:49.104742 kubelet[2310]: I0213 15:28:49.104674 2310 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:28:49.106029 kubelet[2310]: E0213 15:28:49.105989 2310 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.34:6443: connect: connection refused Feb 13 15:28:49.116188 kubelet[2310]: I0213 15:28:49.116159 2310 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:28:49.116903 kubelet[2310]: I0213 15:28:49.116852 2310 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:28:49.117078 kubelet[2310]: I0213 15:28:49.116889 2310 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:28:49.117185 kubelet[2310]: I0213 15:28:49.117096 2310 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:28:49.117185 kubelet[2310]: I0213 15:28:49.117107 2310 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:28:49.117283 kubelet[2310]: I0213 15:28:49.117263 2310 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
15:28:49.118065 kubelet[2310]: I0213 15:28:49.118038 2310 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:28:49.118065 kubelet[2310]: I0213 15:28:49.118055 2310 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:28:49.118135 kubelet[2310]: I0213 15:28:49.118095 2310 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:28:49.118172 kubelet[2310]: I0213 15:28:49.118136 2310 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:28:49.121929 kubelet[2310]: W0213 15:28:49.121799 2310 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Feb 13 15:28:49.121929 kubelet[2310]: E0213 15:28:49.121885 2310 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Feb 13 15:28:49.123709 kubelet[2310]: W0213 15:28:49.122179 2310 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Feb 13 15:28:49.123709 kubelet[2310]: E0213 15:28:49.122236 2310 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Feb 13 15:28:49.124028 kubelet[2310]: I0213 15:28:49.123762 2310 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:28:49.125522 kubelet[2310]: I0213 15:28:49.125500 2310 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:28:49.125644 kubelet[2310]: W0213 15:28:49.125627 2310 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 15:28:49.242464 kubelet[2310]: I0213 15:28:49.242387 2310 server.go:1264] "Started kubelet" Feb 13 15:28:49.242623 kubelet[2310]: I0213 15:28:49.242525 2310 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:28:49.242623 kubelet[2310]: I0213 15:28:49.242539 2310 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:28:49.243372 kubelet[2310]: I0213 15:28:49.243099 2310 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:28:49.244997 kubelet[2310]: I0213 15:28:49.244973 2310 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:28:49.245081 kubelet[2310]: I0213 15:28:49.245065 2310 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:28:49.247415 kubelet[2310]: I0213 15:28:49.247378 2310 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:28:49.248142 kubelet[2310]: I0213 15:28:49.247826 2310 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:28:49.248142 kubelet[2310]: I0213 15:28:49.247890 2310 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:28:49.248142 kubelet[2310]: E0213 15:28:49.247997 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="200ms" Feb 13 15:28:49.248290 kubelet[2310]: W0213 15:28:49.248146 2310 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Feb 13 15:28:49.248290 kubelet[2310]: E0213 15:28:49.248184 2310 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Feb 13 15:28:49.250501 kubelet[2310]: I0213 15:28:49.249805 2310 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:28:49.250501 kubelet[2310]: I0213 15:28:49.249962 2310 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:28:49.251106 kubelet[2310]: I0213 15:28:49.251087 2310 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:28:49.251620 kubelet[2310]: E0213 15:28:49.251419 2310 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823ce22a24d8294 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:28:49.242342036 +0000 UTC m=+0.420770886,LastTimestamp:2025-02-13 15:28:49.242342036 +0000 UTC m=+0.420770886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:28:49.251732 kubelet[2310]: E0213 15:28:49.251711 2310 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:28:49.264753 kubelet[2310]: I0213 15:28:49.264714 2310 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:28:49.264753 kubelet[2310]: I0213 15:28:49.264738 2310 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:28:49.264753 kubelet[2310]: I0213 15:28:49.264760 2310 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:28:49.269054 kubelet[2310]: I0213 15:28:49.269017 2310 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:28:49.270787 kubelet[2310]: I0213 15:28:49.270755 2310 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:28:49.270908 kubelet[2310]: I0213 15:28:49.270815 2310 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:28:49.270908 kubelet[2310]: I0213 15:28:49.270838 2310 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:28:49.270908 kubelet[2310]: E0213 15:28:49.270882 2310 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:28:49.271638 kubelet[2310]: W0213 15:28:49.271581 2310 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Feb 13 15:28:49.271688 kubelet[2310]: E0213 15:28:49.271651 2310 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Feb 13 15:28:49.348662 kubelet[2310]: I0213 15:28:49.348565 2310 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:28:49.348950 kubelet[2310]: E0213 15:28:49.348907 2310 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Feb 13 15:28:49.371300 kubelet[2310]: E0213 15:28:49.371249 2310 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:28:49.448762 kubelet[2310]: E0213 15:28:49.448728 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="400ms" Feb 13 15:28:49.527221 kubelet[2310]: E0213 15:28:49.527121 2310 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823ce22a24d8294 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:28:49.242342036 +0000 UTC m=+0.420770886,LastTimestamp:2025-02-13 15:28:49.242342036 +0000 UTC m=+0.420770886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:28:49.550359 kubelet[2310]: I0213 15:28:49.550332 2310 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:28:49.553928 kubelet[2310]: E0213 15:28:49.553870 2310 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Feb 13 15:28:49.565884 kubelet[2310]: I0213 15:28:49.565845 2310 policy_none.go:49] "None policy: Start" Feb 13 15:28:49.566649 kubelet[2310]: I0213 15:28:49.566602 2310 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:28:49.566649 kubelet[2310]: I0213 15:28:49.566642 2310 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:28:49.572974 kubelet[2310]: E0213 15:28:49.572939 2310 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:28:49.576388 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:28:49.594972 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:28:49.598371 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:28:49.609820 kubelet[2310]: I0213 15:28:49.609386 2310 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:28:49.609820 kubelet[2310]: I0213 15:28:49.609660 2310 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:28:49.609820 kubelet[2310]: I0213 15:28:49.609821 2310 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:28:49.611331 kubelet[2310]: E0213 15:28:49.611305 2310 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:28:49.849093 kubelet[2310]: E0213 15:28:49.849054 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="800ms" Feb 13 15:28:49.956309 kubelet[2310]: I0213 15:28:49.956140 2310 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:28:49.956822 kubelet[2310]: E0213 15:28:49.956566 2310 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Feb 13 15:28:49.973774 kubelet[2310]: I0213 15:28:49.973708 2310 topology_manager.go:215] "Topology Admit Handler" podUID="0fee819fe829e9f403dda7749e8bf75c" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:28:49.975275 kubelet[2310]: I0213 15:28:49.975220 2310 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:28:49.976293 kubelet[2310]: I0213 15:28:49.976264 2310 
topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:28:49.982085 systemd[1]: Created slice kubepods-burstable-pod0fee819fe829e9f403dda7749e8bf75c.slice - libcontainer container kubepods-burstable-pod0fee819fe829e9f403dda7749e8bf75c.slice. Feb 13 15:28:50.014392 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. Feb 13 15:28:50.034017 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. Feb 13 15:28:50.052134 kubelet[2310]: I0213 15:28:50.052093 2310 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fee819fe829e9f403dda7749e8bf75c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0fee819fe829e9f403dda7749e8bf75c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:28:50.052134 kubelet[2310]: I0213 15:28:50.052126 2310 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fee819fe829e9f403dda7749e8bf75c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0fee819fe829e9f403dda7749e8bf75c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:28:50.052262 kubelet[2310]: I0213 15:28:50.052151 2310 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fee819fe829e9f403dda7749e8bf75c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0fee819fe829e9f403dda7749e8bf75c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:28:50.052262 kubelet[2310]: I0213 15:28:50.052175 2310 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:28:50.052262 kubelet[2310]: I0213 15:28:50.052199 2310 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:28:50.052262 kubelet[2310]: I0213 15:28:50.052216 2310 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:28:50.052262 kubelet[2310]: I0213 15:28:50.052231 2310 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 
15:28:50.052405 kubelet[2310]: I0213 15:28:50.052246 2310 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:28:50.052405 kubelet[2310]: I0213 15:28:50.052288 2310 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:28:50.059701 kubelet[2310]: W0213 15:28:50.059648 2310 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Feb 13 15:28:50.059701 kubelet[2310]: E0213 15:28:50.059698 2310 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Feb 13 15:28:50.166879 kubelet[2310]: W0213 15:28:50.166804 2310 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Feb 13 15:28:50.166879 kubelet[2310]: E0213 15:28:50.166866 2310 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Feb 13 15:28:50.291978 kubelet[2310]: W0213 15:28:50.291853 2310 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Feb 13 15:28:50.291978 kubelet[2310]: E0213 15:28:50.291909 2310 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Feb 13 15:28:50.312215 kubelet[2310]: E0213 15:28:50.312175 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:50.312780 containerd[1484]: time="2025-02-13T15:28:50.312735754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0fee819fe829e9f403dda7749e8bf75c,Namespace:kube-system,Attempt:0,}" Feb 13 15:28:50.331933 kubelet[2310]: E0213 15:28:50.331905 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:50.332286 containerd[1484]: time="2025-02-13T15:28:50.332240716Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 15:28:50.338501 kubelet[2310]: E0213 15:28:50.338469 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:50.338777 containerd[1484]: time="2025-02-13T15:28:50.338748678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 15:28:50.361706 kubelet[2310]: W0213 15:28:50.361643 2310 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Feb 13 15:28:50.361706 kubelet[2310]: E0213 15:28:50.361704 2310 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Feb 13 15:28:50.649713 kubelet[2310]: E0213 15:28:50.649591 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="1.6s" Feb 13 15:28:50.758142 kubelet[2310]: I0213 15:28:50.758104 2310 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:28:50.758482 kubelet[2310]: E0213 15:28:50.758457 2310 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Feb 13 15:28:50.833324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4248165767.mount: Deactivated successfully. 
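Editor's note: the "Failed to ensure lease exists, will retry" entries above report a doubling retry interval (200ms, then 400ms, 800ms, 1.6s) while the API server is still unreachable. A small illustrative Go sketch of that doubling backoff; the 200ms start value comes from the log, the cap is an assumption for the sketch, not a value read from the node:

```go
// Illustrative only: reproduce the doubling retry interval the lease
// controller reports above (200ms -> 400ms -> 800ms -> 1.6s ...).
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed cap for this sketch
	for i := 0; i < 6; i++ {
		fmt.Printf("retry %d after %v\n", i+1, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```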
Feb 13 15:28:50.837752 containerd[1484]: time="2025-02-13T15:28:50.837701174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:28:50.840391 containerd[1484]: time="2025-02-13T15:28:50.840341906Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 15:28:50.841351 containerd[1484]: time="2025-02-13T15:28:50.841319999Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:28:50.843178 containerd[1484]: time="2025-02-13T15:28:50.843141324Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:28:50.843922 containerd[1484]: time="2025-02-13T15:28:50.843868358Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:28:50.844790 containerd[1484]: time="2025-02-13T15:28:50.844739441Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:28:50.845555 containerd[1484]: time="2025-02-13T15:28:50.845514414Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:28:50.846396 containerd[1484]: time="2025-02-13T15:28:50.846363546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:28:50.848300 containerd[1484]: time="2025-02-13T15:28:50.848256485Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 515.907627ms" Feb 13 15:28:50.848992 containerd[1484]: time="2025-02-13T15:28:50.848958952Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 536.107592ms" Feb 13 15:28:50.853486 containerd[1484]: time="2025-02-13T15:28:50.853425016Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 514.615644ms" Feb 13 15:28:51.024262 containerd[1484]: time="2025-02-13T15:28:51.023879128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:51.024262 containerd[1484]: time="2025-02-13T15:28:51.023940473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:51.024262 containerd[1484]: time="2025-02-13T15:28:51.023953868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:51.024262 containerd[1484]: time="2025-02-13T15:28:51.022702441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:51.024262 containerd[1484]: time="2025-02-13T15:28:51.024099681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:51.024262 containerd[1484]: time="2025-02-13T15:28:51.024118226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:51.024706 containerd[1484]: time="2025-02-13T15:28:51.024208826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:51.026693 containerd[1484]: time="2025-02-13T15:28:51.025700823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:51.026693 containerd[1484]: time="2025-02-13T15:28:51.025759874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:51.026693 containerd[1484]: time="2025-02-13T15:28:51.025773560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:51.026693 containerd[1484]: time="2025-02-13T15:28:51.025844613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:51.026693 containerd[1484]: time="2025-02-13T15:28:51.025382186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:51.056657 systemd[1]: Started cri-containerd-12b36e1f47f315d0a2e33171888cd0004e0912562892d5026e22653f9c680aaf.scope - libcontainer container 12b36e1f47f315d0a2e33171888cd0004e0912562892d5026e22653f9c680aaf. Feb 13 15:28:51.058606 systemd[1]: Started cri-containerd-3b8e8eb74c59c8ef7bdcfe1707ddb1652d392e77a82b263dd6473a78378d23f1.scope - libcontainer container 3b8e8eb74c59c8ef7bdcfe1707ddb1652d392e77a82b263dd6473a78378d23f1. Feb 13 15:28:51.062921 systemd[1]: Started cri-containerd-ebfeca73930e046ff7ed6759554a5dd4250f65227c8fda4d3d899535b53b17f8.scope - libcontainer container ebfeca73930e046ff7ed6759554a5dd4250f65227c8fda4d3d899535b53b17f8. 
Feb 13 15:28:51.126170 kubelet[2310]: E0213 15:28:51.126074 2310 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.34:6443: connect: connection refused Feb 13 15:28:51.210888 containerd[1484]: time="2025-02-13T15:28:51.210840570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0fee819fe829e9f403dda7749e8bf75c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebfeca73930e046ff7ed6759554a5dd4250f65227c8fda4d3d899535b53b17f8\"" Feb 13 15:28:51.212545 containerd[1484]: time="2025-02-13T15:28:51.212341404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b8e8eb74c59c8ef7bdcfe1707ddb1652d392e77a82b263dd6473a78378d23f1\"" Feb 13 15:28:51.213746 kubelet[2310]: E0213 15:28:51.213714 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:51.214079 kubelet[2310]: E0213 15:28:51.213928 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:51.216334 containerd[1484]: time="2025-02-13T15:28:51.216302752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"12b36e1f47f315d0a2e33171888cd0004e0912562892d5026e22653f9c680aaf\"" Feb 13 15:28:51.216892 kubelet[2310]: E0213 15:28:51.216862 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:51.217346 containerd[1484]: time="2025-02-13T15:28:51.217306584Z" level=info msg="CreateContainer within sandbox \"ebfeca73930e046ff7ed6759554a5dd4250f65227c8fda4d3d899535b53b17f8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:28:51.217499 containerd[1484]: time="2025-02-13T15:28:51.217470331Z" level=info msg="CreateContainer within sandbox \"3b8e8eb74c59c8ef7bdcfe1707ddb1652d392e77a82b263dd6473a78378d23f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:28:51.219080 containerd[1484]: time="2025-02-13T15:28:51.219043791Z" level=info msg="CreateContainer within sandbox \"12b36e1f47f315d0a2e33171888cd0004e0912562892d5026e22653f9c680aaf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:28:51.787148 containerd[1484]: time="2025-02-13T15:28:51.787071313Z" level=info msg="CreateContainer within sandbox \"ebfeca73930e046ff7ed6759554a5dd4250f65227c8fda4d3d899535b53b17f8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8ec7b68ebcfc8b841340e514baa11f451c3385dd66343931cbb13768ffc54073\"" Feb 13 15:28:51.787988 containerd[1484]: time="2025-02-13T15:28:51.787937547Z" level=info msg="StartContainer for \"8ec7b68ebcfc8b841340e514baa11f451c3385dd66343931cbb13768ffc54073\"" Feb 13 15:28:51.793285 containerd[1484]: time="2025-02-13T15:28:51.793229349Z" level=info msg="CreateContainer within sandbox 
\"12b36e1f47f315d0a2e33171888cd0004e0912562892d5026e22653f9c680aaf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3ae88217cb7adc22a228d0b9e2bbc6fe1eacd07698cff70019c885694f6de098\"" Feb 13 15:28:51.793757 containerd[1484]: time="2025-02-13T15:28:51.793734106Z" level=info msg="StartContainer for \"3ae88217cb7adc22a228d0b9e2bbc6fe1eacd07698cff70019c885694f6de098\"" Feb 13 15:28:51.795329 containerd[1484]: time="2025-02-13T15:28:51.795303839Z" level=info msg="CreateContainer within sandbox \"3b8e8eb74c59c8ef7bdcfe1707ddb1652d392e77a82b263dd6473a78378d23f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f445b010f19cae9de10364716845250ca9949233c7d8ed1dd7f5bd43a3ae77e8\"" Feb 13 15:28:51.795799 containerd[1484]: time="2025-02-13T15:28:51.795762930Z" level=info msg="StartContainer for \"f445b010f19cae9de10364716845250ca9949233c7d8ed1dd7f5bd43a3ae77e8\"" Feb 13 15:28:51.820658 systemd[1]: Started cri-containerd-8ec7b68ebcfc8b841340e514baa11f451c3385dd66343931cbb13768ffc54073.scope - libcontainer container 8ec7b68ebcfc8b841340e514baa11f451c3385dd66343931cbb13768ffc54073. Feb 13 15:28:51.828223 systemd[1]: Started cri-containerd-f445b010f19cae9de10364716845250ca9949233c7d8ed1dd7f5bd43a3ae77e8.scope - libcontainer container f445b010f19cae9de10364716845250ca9949233c7d8ed1dd7f5bd43a3ae77e8. Feb 13 15:28:51.851576 systemd[1]: Started cri-containerd-3ae88217cb7adc22a228d0b9e2bbc6fe1eacd07698cff70019c885694f6de098.scope - libcontainer container 3ae88217cb7adc22a228d0b9e2bbc6fe1eacd07698cff70019c885694f6de098. Feb 13 15:28:51.886816 containerd[1484]: time="2025-02-13T15:28:51.886762975Z" level=info msg="StartContainer for \"8ec7b68ebcfc8b841340e514baa11f451c3385dd66343931cbb13768ffc54073\" returns successfully" Feb 13 15:28:51.897887 containerd[1484]: time="2025-02-13T15:28:51.897825687Z" level=info msg="StartContainer for \"f445b010f19cae9de10364716845250ca9949233c7d8ed1dd7f5bd43a3ae77e8\" returns successfully" Feb 13 15:28:51.907634 containerd[1484]: time="2025-02-13T15:28:51.907571350Z" level=info msg="StartContainer for \"3ae88217cb7adc22a228d0b9e2bbc6fe1eacd07698cff70019c885694f6de098\" returns successfully" Feb 13 15:28:52.290573 kubelet[2310]: E0213 15:28:52.290538 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:52.293228 kubelet[2310]: E0213 15:28:52.293184 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:52.299148 kubelet[2310]: E0213 15:28:52.299103 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:52.360315 kubelet[2310]: I0213 15:28:52.360248 2310 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:28:53.310179 kubelet[2310]: E0213 15:28:53.310077 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:53.311681 kubelet[2310]: E0213 15:28:53.311071 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:53.312743 
kubelet[2310]: E0213 15:28:53.312631 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:53.324534 kubelet[2310]: E0213 15:28:53.324491 2310 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 15:28:53.521141 kubelet[2310]: I0213 15:28:53.521051 2310 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:28:54.125762 kubelet[2310]: I0213 15:28:54.125714 2310 apiserver.go:52] "Watching apiserver" Feb 13 15:28:54.148116 kubelet[2310]: I0213 15:28:54.148059 2310 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:28:54.322037 kubelet[2310]: E0213 15:28:54.321995 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:55.253007 systemd[1]: Reload requested from client PID 2591 ('systemctl') (unit session-7.scope)... Feb 13 15:28:55.253051 systemd[1]: Reloading... Feb 13 15:28:55.307077 kubelet[2310]: E0213 15:28:55.307046 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:55.335485 zram_generator::config[2638]: No configuration found. Feb 13 15:28:55.457503 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:28:55.578197 systemd[1]: Reloading finished in 324 ms. Feb 13 15:28:55.601409 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:28:55.615260 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:28:55.615565 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:28:55.615610 systemd[1]: kubelet.service: Consumed 958ms CPU time, 118.8M memory peak. Feb 13 15:28:55.623793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:28:55.784170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:28:55.788526 (kubelet)[2680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:28:55.833956 kubelet[2680]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:28:55.833956 kubelet[2680]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:28:55.833956 kubelet[2680]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
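Editor's note: the recurring dns.go "Nameserver limits exceeded" warnings above mean the host's resolv.conf lists more nameservers than the kubelet will apply; only the first three are kept, which is why the applied line in the log is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A simplified Go sketch of that truncation behaviour; the limit of 3 matches the log, but the parsing here is an illustration, not the kubelet implementation:

```go
// Sketch: keep only the first three "nameserver" entries from resolv.conf,
// dropping the rest, as the kubelet warnings above describe.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	var nameservers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if len(nameservers) > maxNameservers {
		fmt.Printf("dropping %d extra nameserver(s)\n", len(nameservers)-maxNameservers)
		nameservers = nameservers[:maxNameservers]
	}
	fmt.Println("applied nameservers:", strings.Join(nameservers, " "))
}
```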
Feb 13 15:28:55.833956 kubelet[2680]: I0213 15:28:55.833909 2680 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:28:55.838834 kubelet[2680]: I0213 15:28:55.838811 2680 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:28:55.838834 kubelet[2680]: I0213 15:28:55.838831 2680 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:28:55.838986 kubelet[2680]: I0213 15:28:55.838974 2680 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:28:55.840156 kubelet[2680]: I0213 15:28:55.840134 2680 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:28:55.841518 kubelet[2680]: I0213 15:28:55.841281 2680 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:28:55.850752 kubelet[2680]: I0213 15:28:55.850722 2680 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:28:55.850966 kubelet[2680]: I0213 15:28:55.850932 2680 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:28:55.851134 kubelet[2680]: I0213 15:28:55.850962 2680 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:28:55.851222 kubelet[2680]: I0213 15:28:55.851146 2680 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:28:55.851222 kubelet[2680]: I0213 15:28:55.851155 2680 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:28:55.851222 kubelet[2680]: I0213 15:28:55.851204 2680 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:28:55.851323 kubelet[2680]: I0213 15:28:55.851306 2680 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:28:55.851323 kubelet[2680]: I0213 15:28:55.851321 2680 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Feb 13 15:28:55.851370 kubelet[2680]: I0213 15:28:55.851342 2680 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:28:55.851370 kubelet[2680]: I0213 15:28:55.851357 2680 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:28:55.853692 kubelet[2680]: I0213 15:28:55.851848 2680 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:28:55.853692 kubelet[2680]: I0213 15:28:55.852016 2680 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:28:55.853692 kubelet[2680]: I0213 15:28:55.852650 2680 server.go:1264] "Started kubelet" Feb 13 15:28:55.854351 kubelet[2680]: I0213 15:28:55.854331 2680 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:28:55.857353 kubelet[2680]: I0213 15:28:55.856726 2680 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:28:55.857646 kubelet[2680]: I0213 15:28:55.857624 2680 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:28:55.858291 kubelet[2680]: I0213 15:28:55.858264 2680 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:28:55.858941 kubelet[2680]: I0213 15:28:55.858927 2680 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:28:55.859302 kubelet[2680]: I0213 15:28:55.859266 2680 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:28:55.862461 kubelet[2680]: I0213 15:28:55.861779 2680 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:28:55.862461 kubelet[2680]: I0213 15:28:55.862048 2680 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:28:55.864726 kubelet[2680]: I0213 15:28:55.864691 2680 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:28:55.865793 kubelet[2680]: I0213 15:28:55.864817 2680 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:28:55.865901 kubelet[2680]: E0213 15:28:55.865869 2680 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:28:55.867756 kubelet[2680]: I0213 15:28:55.867734 2680 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:28:55.870996 kubelet[2680]: I0213 15:28:55.870950 2680 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:28:55.872285 kubelet[2680]: I0213 15:28:55.872250 2680 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:28:55.872285 kubelet[2680]: I0213 15:28:55.872286 2680 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:28:55.872372 kubelet[2680]: I0213 15:28:55.872306 2680 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:28:55.872402 kubelet[2680]: E0213 15:28:55.872365 2680 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:28:55.901551 kubelet[2680]: I0213 15:28:55.901512 2680 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:28:55.901551 kubelet[2680]: I0213 15:28:55.901539 2680 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:28:55.901551 kubelet[2680]: I0213 15:28:55.901565 2680 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:28:55.901771 kubelet[2680]: I0213 15:28:55.901731 2680 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:28:55.901771 kubelet[2680]: I0213 15:28:55.901742 2680 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:28:55.901771 kubelet[2680]: I0213 15:28:55.901762 2680 policy_none.go:49] "None policy: Start" Feb 13 15:28:55.902292 kubelet[2680]: I0213 15:28:55.902264 2680 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:28:55.902344 kubelet[2680]: I0213 15:28:55.902299 2680 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:28:55.902533 kubelet[2680]: I0213 15:28:55.902517 2680 state_mem.go:75] "Updated machine memory state" Feb 13 15:28:55.909560 kubelet[2680]: I0213 15:28:55.909518 2680 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:28:55.909772 kubelet[2680]: I0213 15:28:55.909728 2680 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:28:55.909926 kubelet[2680]: I0213 15:28:55.909880 2680 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:28:55.963690 kubelet[2680]: I0213 15:28:55.963639 2680 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:28:55.970766 kubelet[2680]: I0213 15:28:55.970722 2680 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 15:28:55.970924 kubelet[2680]: I0213 15:28:55.970817 2680 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:28:55.972504 kubelet[2680]: I0213 15:28:55.972458 2680 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:28:55.972665 kubelet[2680]: I0213 15:28:55.972548 2680 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:28:55.972665 kubelet[2680]: I0213 15:28:55.972606 2680 topology_manager.go:215] "Topology Admit Handler" podUID="0fee819fe829e9f403dda7749e8bf75c" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:28:55.978122 kubelet[2680]: E0213 15:28:55.977990 2680 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:28:56.161252 kubelet[2680]: I0213 15:28:56.161093 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:28:56.161252 kubelet[2680]: I0213 15:28:56.161143 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:28:56.161252 kubelet[2680]: I0213 15:28:56.161164 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:28:56.161252 kubelet[2680]: I0213 15:28:56.161186 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fee819fe829e9f403dda7749e8bf75c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0fee819fe829e9f403dda7749e8bf75c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:28:56.161252 kubelet[2680]: I0213 15:28:56.161209 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:28:56.161524 kubelet[2680]: I0213 15:28:56.161228 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:28:56.161524 kubelet[2680]: I0213 15:28:56.161249 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:28:56.161524 kubelet[2680]: I0213 15:28:56.161278 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fee819fe829e9f403dda7749e8bf75c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0fee819fe829e9f403dda7749e8bf75c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:28:56.161524 kubelet[2680]: I0213 15:28:56.161331 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fee819fe829e9f403dda7749e8bf75c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0fee819fe829e9f403dda7749e8bf75c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:28:56.252687 sudo[2715]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 
15:28:56.253176 sudo[2715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:28:56.278046 kubelet[2680]: E0213 15:28:56.277982 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:56.278464 kubelet[2680]: E0213 15:28:56.278215 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:56.278649 kubelet[2680]: E0213 15:28:56.278619 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:56.729230 sudo[2715]: pam_unix(sudo:session): session closed for user root Feb 13 15:28:56.857064 kubelet[2680]: I0213 15:28:56.856993 2680 apiserver.go:52] "Watching apiserver" Feb 13 15:28:56.859739 kubelet[2680]: I0213 15:28:56.859716 2680 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:28:56.885805 kubelet[2680]: E0213 15:28:56.885024 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:56.885805 kubelet[2680]: E0213 15:28:56.885647 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:56.894799 kubelet[2680]: E0213 15:28:56.894748 2680 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:28:56.895517 kubelet[2680]: E0213 15:28:56.895204 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:56.912009 kubelet[2680]: I0213 15:28:56.911791 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.911763629 podStartE2EDuration="2.911763629s" podCreationTimestamp="2025-02-13 15:28:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:28:56.904603318 +0000 UTC m=+1.111548107" watchObservedRunningTime="2025-02-13 15:28:56.911763629 +0000 UTC m=+1.118708418" Feb 13 15:28:56.940853 kubelet[2680]: I0213 15:28:56.940773 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.940747393 podStartE2EDuration="1.940747393s" podCreationTimestamp="2025-02-13 15:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:28:56.912070401 +0000 UTC m=+1.119015200" watchObservedRunningTime="2025-02-13 15:28:56.940747393 +0000 UTC m=+1.147692182" Feb 13 15:28:56.941075 kubelet[2680]: I0213 15:28:56.940861 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.940857866 podStartE2EDuration="1.940857866s" podCreationTimestamp="2025-02-13 15:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:28:56.940534873 +0000 UTC m=+1.147479662" watchObservedRunningTime="2025-02-13 15:28:56.940857866 +0000 UTC m=+1.147802655" Feb 13 15:28:57.886708 kubelet[2680]: E0213 15:28:57.886663 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:58.002154 sudo[1672]: pam_unix(sudo:session): session closed for user root Feb 13 15:28:58.003908 sshd[1671]: Connection closed by 10.0.0.1 port 54678 Feb 13 15:28:58.004436 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:58.008834 systemd[1]: sshd@6-10.0.0.34:22-10.0.0.1:54678.service: Deactivated successfully. Feb 13 15:28:58.011106 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:28:58.011343 systemd[1]: session-7.scope: Consumed 4.656s CPU time, 280.4M memory peak. Feb 13 15:28:58.012592 systemd-logind[1469]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:28:58.013512 systemd-logind[1469]: Removed session 7. Feb 13 15:28:59.397720 kubelet[2680]: E0213 15:28:59.397633 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:03.287287 kubelet[2680]: E0213 15:29:03.287252 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:03.894565 kubelet[2680]: E0213 15:29:03.894524 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:04.096006 kubelet[2680]: E0213 15:29:04.095958 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:04.895929 kubelet[2680]: E0213 15:29:04.895888 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:08.455436 update_engine[1472]: I20250213 15:29:08.455312 1472 update_attempter.cc:509] Updating boot flags... Feb 13 15:29:08.541547 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2766) Feb 13 15:29:08.587474 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2768) Feb 13 15:29:08.626476 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2768) Feb 13 15:29:09.403091 kubelet[2680]: E0213 15:29:09.403046 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:09.708415 kubelet[2680]: I0213 15:29:09.708267 2680 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:29:09.711362 containerd[1484]: time="2025-02-13T15:29:09.709096285Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
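Editor's note: the kuberuntime_manager entry above pushes PodCIDR 192.168.0.0/24 to the container runtime over CRI, and the CNI config is expected to follow once other components drop it in. A quick, illustration-only Go check of what that CIDR actually covers (the CIDR string is taken from the log):

```go
// Illustration only: inspect the PodCIDR the kubelet pushed to the runtime
// above (192.168.0.0/24); prints the network, prefix length, and address count.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Printf("network %s, /%d of %d bits, %d addresses\n",
		ipnet.IP, ones, bits, 1<<(bits-ones))
}
```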
Feb 13 15:29:09.711802 kubelet[2680]: I0213 15:29:09.709323 2680 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:29:09.746525 kubelet[2680]: I0213 15:29:09.746425 2680 topology_manager.go:215] "Topology Admit Handler" podUID="273cf29d-365f-426a-bc7b-18ab01aedc4a" podNamespace="kube-system" podName="cilium-operator-599987898-tm49d" Feb 13 15:29:09.755865 systemd[1]: Created slice kubepods-besteffort-pod273cf29d_365f_426a_bc7b_18ab01aedc4a.slice - libcontainer container kubepods-besteffort-pod273cf29d_365f_426a_bc7b_18ab01aedc4a.slice. Feb 13 15:29:09.845705 kubelet[2680]: I0213 15:29:09.845646 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/273cf29d-365f-426a-bc7b-18ab01aedc4a-cilium-config-path\") pod \"cilium-operator-599987898-tm49d\" (UID: \"273cf29d-365f-426a-bc7b-18ab01aedc4a\") " pod="kube-system/cilium-operator-599987898-tm49d" Feb 13 15:29:09.845705 kubelet[2680]: I0213 15:29:09.845708 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5mzs\" (UniqueName: \"kubernetes.io/projected/273cf29d-365f-426a-bc7b-18ab01aedc4a-kube-api-access-w5mzs\") pod \"cilium-operator-599987898-tm49d\" (UID: \"273cf29d-365f-426a-bc7b-18ab01aedc4a\") " pod="kube-system/cilium-operator-599987898-tm49d" Feb 13 15:29:10.069075 kubelet[2680]: E0213 15:29:10.068926 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:10.069790 containerd[1484]: time="2025-02-13T15:29:10.069743186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-tm49d,Uid:273cf29d-365f-426a-bc7b-18ab01aedc4a,Namespace:kube-system,Attempt:0,}" Feb 13 15:29:10.098285 containerd[1484]: time="2025-02-13T15:29:10.098160848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:29:10.098285 containerd[1484]: time="2025-02-13T15:29:10.098247873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:29:10.098285 containerd[1484]: time="2025-02-13T15:29:10.098262460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:29:10.098584 containerd[1484]: time="2025-02-13T15:29:10.098377829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:29:10.124674 systemd[1]: Started cri-containerd-d0e73684e85530877ecc2ce57766e8db4c985f2f7e4de7e1f4021140d8744f64.scope - libcontainer container d0e73684e85530877ecc2ce57766e8db4c985f2f7e4de7e1f4021140d8744f64. 
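Each "Topology Admit Handler" entry above is followed by systemd creating a cgroup slice for the pod, and the slice name can be derived mechanically from the pod's QoS class and UID: dashes in the UID become underscores, the result is prefixed with "pod", and it is nested under kubepods-besteffort or kubepods-burstable (Guaranteed pods appear to sit directly under kubepods). A rough sketch of that mapping; sliceName is a hypothetical helper and only the naming pattern itself is taken from the entries above:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reproduces the systemd cgroup-driver naming visible in the log:
    // kubepods-<qos>-pod<uid-with-underscores>.slice, with the <qos> segment
    // omitted for Guaranteed pods.
    func sliceName(qosClass, podUID string) string {
        uid := strings.ReplaceAll(podUID, "-", "_")
        switch qosClass {
        case "besteffort", "burstable":
            return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
        default: // Guaranteed
            return fmt.Sprintf("kubepods-pod%s.slice", uid)
        }
    }

    func main() {
        // UID of cilium-operator-599987898-tm49d from the admit entry above.
        fmt.Println(sliceName("besteffort", "273cf29d-365f-426a-bc7b-18ab01aedc4a"))
    }

Run against the cilium-operator pod's UID, this prints kubepods-besteffort-pod273cf29d_365f_426a_bc7b_18ab01aedc4a.slice, matching the "Created slice" entry that follows the admit handler.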
Feb 13 15:29:10.169171 containerd[1484]: time="2025-02-13T15:29:10.169102509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-tm49d,Uid:273cf29d-365f-426a-bc7b-18ab01aedc4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0e73684e85530877ecc2ce57766e8db4c985f2f7e4de7e1f4021140d8744f64\"" Feb 13 15:29:10.170067 kubelet[2680]: E0213 15:29:10.170039 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:10.171280 containerd[1484]: time="2025-02-13T15:29:10.171246947Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:29:10.376037 kubelet[2680]: I0213 15:29:10.375967 2680 topology_manager.go:215] "Topology Admit Handler" podUID="b5cec18c-e175-4b02-980b-84a78975f681" podNamespace="kube-system" podName="kube-proxy-zg7w5" Feb 13 15:29:10.382667 kubelet[2680]: I0213 15:29:10.382574 2680 topology_manager.go:215] "Topology Admit Handler" podUID="15bf415c-b75d-45be-9b93-843f98205a7f" podNamespace="kube-system" podName="cilium-ttkr8" Feb 13 15:29:10.386012 systemd[1]: Created slice kubepods-besteffort-podb5cec18c_e175_4b02_980b_84a78975f681.slice - libcontainer container kubepods-besteffort-podb5cec18c_e175_4b02_980b_84a78975f681.slice. Feb 13 15:29:10.393969 systemd[1]: Created slice kubepods-burstable-pod15bf415c_b75d_45be_9b93_843f98205a7f.slice - libcontainer container kubepods-burstable-pod15bf415c_b75d_45be_9b93_843f98205a7f.slice. Feb 13 15:29:10.450532 kubelet[2680]: I0213 15:29:10.450389 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b5cec18c-e175-4b02-980b-84a78975f681-kube-proxy\") pod \"kube-proxy-zg7w5\" (UID: \"b5cec18c-e175-4b02-980b-84a78975f681\") " pod="kube-system/kube-proxy-zg7w5" Feb 13 15:29:10.450532 kubelet[2680]: I0213 15:29:10.450535 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-cni-path\") pod \"cilium-ttkr8\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " pod="kube-system/cilium-ttkr8" Feb 13 15:29:10.451119 kubelet[2680]: I0213 15:29:10.450575 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgdcf\" (UniqueName: \"kubernetes.io/projected/15bf415c-b75d-45be-9b93-843f98205a7f-kube-api-access-cgdcf\") pod \"cilium-ttkr8\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " pod="kube-system/cilium-ttkr8" Feb 13 15:29:10.451119 kubelet[2680]: I0213 15:29:10.450594 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15bf415c-b75d-45be-9b93-843f98205a7f-hubble-tls\") pod \"cilium-ttkr8\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " pod="kube-system/cilium-ttkr8" Feb 13 15:29:10.451119 kubelet[2680]: I0213 15:29:10.450614 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-cilium-run\") pod \"cilium-ttkr8\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " pod="kube-system/cilium-ttkr8" Feb 13 15:29:10.451119 kubelet[2680]: I0213 15:29:10.450651 
2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-etc-cni-netd\") pod \"cilium-ttkr8\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " pod="kube-system/cilium-ttkr8" Feb 13 15:29:10.451119 kubelet[2680]: I0213 15:29:10.450668 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-host-proc-sys-kernel\") pod \"cilium-ttkr8\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " pod="kube-system/cilium-ttkr8" Feb 13 15:29:10.451119 kubelet[2680]: I0213 15:29:10.450684 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-hostproc\") pod \"cilium-ttkr8\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " pod="kube-system/cilium-ttkr8" Feb 13 15:29:10.451339 kubelet[2680]: I0213 15:29:10.450700 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5cec18c-e175-4b02-980b-84a78975f681-lib-modules\") pod \"kube-proxy-zg7w5\" (UID: \"b5cec18c-e175-4b02-980b-84a78975f681\") " pod="kube-system/kube-proxy-zg7w5" Feb 13 15:29:10.451339 kubelet[2680]: I0213 15:29:10.450733 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-lib-modules\") pod \"cilium-ttkr8\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " pod="kube-system/cilium-ttkr8" Feb 13 15:29:10.451339 kubelet[2680]: I0213 15:29:10.450806 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-xtables-lock\") pod \"cilium-ttkr8\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " pod="kube-system/cilium-ttkr8" Feb 13 15:29:10.451339 kubelet[2680]: I0213 15:29:10.450869 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr58g\" (UniqueName: \"kubernetes.io/projected/b5cec18c-e175-4b02-980b-84a78975f681-kube-api-access-mr58g\") pod \"kube-proxy-zg7w5\" (UID: \"b5cec18c-e175-4b02-980b-84a78975f681\") " pod="kube-system/kube-proxy-zg7w5" Feb 13 15:29:10.451339 kubelet[2680]: I0213 15:29:10.450896 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-cilium-cgroup\") pod \"cilium-ttkr8\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " pod="kube-system/cilium-ttkr8" Feb 13 15:29:10.454589 kubelet[2680]: I0213 15:29:10.450917 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/15bf415c-b75d-45be-9b93-843f98205a7f-clustermesh-secrets\") pod \"cilium-ttkr8\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " pod="kube-system/cilium-ttkr8" Feb 13 15:29:10.454589 kubelet[2680]: I0213 15:29:10.450935 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/15bf415c-b75d-45be-9b93-843f98205a7f-cilium-config-path\") pod \"cilium-ttkr8\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " pod="kube-system/cilium-ttkr8" Feb 13 15:29:10.454589 kubelet[2680]: I0213 15:29:10.450955 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-host-proc-sys-net\") pod \"cilium-ttkr8\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " pod="kube-system/cilium-ttkr8" Feb 13 15:29:10.454589 kubelet[2680]: I0213 15:29:10.450981 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5cec18c-e175-4b02-980b-84a78975f681-xtables-lock\") pod \"kube-proxy-zg7w5\" (UID: \"b5cec18c-e175-4b02-980b-84a78975f681\") " pod="kube-system/kube-proxy-zg7w5" Feb 13 15:29:10.454589 kubelet[2680]: I0213 15:29:10.450998 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-bpf-maps\") pod \"cilium-ttkr8\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " pod="kube-system/cilium-ttkr8" Feb 13 15:29:10.691714 kubelet[2680]: E0213 15:29:10.691554 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:10.692374 containerd[1484]: time="2025-02-13T15:29:10.692158850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zg7w5,Uid:b5cec18c-e175-4b02-980b-84a78975f681,Namespace:kube-system,Attempt:0,}" Feb 13 15:29:10.696991 kubelet[2680]: E0213 15:29:10.696948 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:10.697424 containerd[1484]: time="2025-02-13T15:29:10.697387237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ttkr8,Uid:15bf415c-b75d-45be-9b93-843f98205a7f,Namespace:kube-system,Attempt:0,}" Feb 13 15:29:10.778378 containerd[1484]: time="2025-02-13T15:29:10.777752293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:29:10.778378 containerd[1484]: time="2025-02-13T15:29:10.777810583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:29:10.778378 containerd[1484]: time="2025-02-13T15:29:10.777824660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:29:10.779373 containerd[1484]: time="2025-02-13T15:29:10.778549666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:29:10.779464 containerd[1484]: time="2025-02-13T15:29:10.778383310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:29:10.779464 containerd[1484]: time="2025-02-13T15:29:10.778433685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:29:10.779464 containerd[1484]: time="2025-02-13T15:29:10.778471126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:29:10.779464 containerd[1484]: time="2025-02-13T15:29:10.778559214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:29:10.802658 systemd[1]: Started cri-containerd-ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376.scope - libcontainer container ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376. Feb 13 15:29:10.806842 systemd[1]: Started cri-containerd-edcf221cce8de1514d2271dc572480b5386a664ddb629b5f22f866a0af24cec3.scope - libcontainer container edcf221cce8de1514d2271dc572480b5386a664ddb629b5f22f866a0af24cec3. Feb 13 15:29:10.834635 containerd[1484]: time="2025-02-13T15:29:10.834483791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ttkr8,Uid:15bf415c-b75d-45be-9b93-843f98205a7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376\"" Feb 13 15:29:10.835355 kubelet[2680]: E0213 15:29:10.835313 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:10.842695 containerd[1484]: time="2025-02-13T15:29:10.842646043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zg7w5,Uid:b5cec18c-e175-4b02-980b-84a78975f681,Namespace:kube-system,Attempt:0,} returns sandbox id \"edcf221cce8de1514d2271dc572480b5386a664ddb629b5f22f866a0af24cec3\"" Feb 13 15:29:10.843345 kubelet[2680]: E0213 15:29:10.843304 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:10.845324 containerd[1484]: time="2025-02-13T15:29:10.845295418Z" level=info msg="CreateContainer within sandbox \"edcf221cce8de1514d2271dc572480b5386a664ddb629b5f22f866a0af24cec3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:29:10.887916 containerd[1484]: time="2025-02-13T15:29:10.887853913Z" level=info msg="CreateContainer within sandbox \"edcf221cce8de1514d2271dc572480b5386a664ddb629b5f22f866a0af24cec3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b9dc2b273699cbfef1040effb2045e242210dca1638f2745b25e95e9640e4796\"" Feb 13 15:29:10.888534 containerd[1484]: time="2025-02-13T15:29:10.888512242Z" level=info msg="StartContainer for \"b9dc2b273699cbfef1040effb2045e242210dca1638f2745b25e95e9640e4796\"" Feb 13 15:29:10.918582 systemd[1]: Started cri-containerd-b9dc2b273699cbfef1040effb2045e242210dca1638f2745b25e95e9640e4796.scope - libcontainer container b9dc2b273699cbfef1040effb2045e242210dca1638f2745b25e95e9640e4796. 
Feb 13 15:29:10.953564 containerd[1484]: time="2025-02-13T15:29:10.953367208Z" level=info msg="StartContainer for \"b9dc2b273699cbfef1040effb2045e242210dca1638f2745b25e95e9640e4796\" returns successfully" Feb 13 15:29:11.909514 kubelet[2680]: E0213 15:29:11.909472 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:11.919899 kubelet[2680]: I0213 15:29:11.919807 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zg7w5" podStartSLOduration=1.9197888779999999 podStartE2EDuration="1.919788878s" podCreationTimestamp="2025-02-13 15:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:29:11.919550507 +0000 UTC m=+16.126495296" watchObservedRunningTime="2025-02-13 15:29:11.919788878 +0000 UTC m=+16.126733668" Feb 13 15:29:12.462994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1925402563.mount: Deactivated successfully. Feb 13 15:29:12.911664 kubelet[2680]: E0213 15:29:12.911627 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:13.111504 containerd[1484]: time="2025-02-13T15:29:13.111436141Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:29:13.112160 containerd[1484]: time="2025-02-13T15:29:13.112106109Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 15:29:13.113201 containerd[1484]: time="2025-02-13T15:29:13.113165905Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:29:13.114683 containerd[1484]: time="2025-02-13T15:29:13.114634264Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.943233135s" Feb 13 15:29:13.114683 containerd[1484]: time="2025-02-13T15:29:13.114673880Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 15:29:13.115777 containerd[1484]: time="2025-02-13T15:29:13.115603379Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:29:13.116864 containerd[1484]: time="2025-02-13T15:29:13.116832626Z" level=info msg="CreateContainer within sandbox \"d0e73684e85530877ecc2ce57766e8db4c985f2f7e4de7e1f4021140d8744f64\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:29:13.129520 containerd[1484]: time="2025-02-13T15:29:13.129482020Z" level=info 
msg="CreateContainer within sandbox \"d0e73684e85530877ecc2ce57766e8db4c985f2f7e4de7e1f4021140d8744f64\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267\"" Feb 13 15:29:13.129922 containerd[1484]: time="2025-02-13T15:29:13.129897287Z" level=info msg="StartContainer for \"c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267\"" Feb 13 15:29:13.157579 systemd[1]: Started cri-containerd-c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267.scope - libcontainer container c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267. Feb 13 15:29:13.184947 containerd[1484]: time="2025-02-13T15:29:13.184804584Z" level=info msg="StartContainer for \"c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267\" returns successfully" Feb 13 15:29:13.915029 kubelet[2680]: E0213 15:29:13.914995 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:13.925231 kubelet[2680]: I0213 15:29:13.925129 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-tm49d" podStartSLOduration=1.980480249 podStartE2EDuration="4.925082569s" podCreationTimestamp="2025-02-13 15:29:09 +0000 UTC" firstStartedPulling="2025-02-13 15:29:10.170852488 +0000 UTC m=+14.377797277" lastFinishedPulling="2025-02-13 15:29:13.115454808 +0000 UTC m=+17.322399597" observedRunningTime="2025-02-13 15:29:13.924907789 +0000 UTC m=+18.131852608" watchObservedRunningTime="2025-02-13 15:29:13.925082569 +0000 UTC m=+18.132027368" Feb 13 15:29:14.918099 kubelet[2680]: E0213 15:29:14.918056 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:21.742698 systemd[1]: Started sshd@7-10.0.0.34:22-10.0.0.1:50264.service - OpenSSH per-connection server daemon (10.0.0.1:50264). Feb 13 15:29:21.816647 sshd[3114]: Accepted publickey for core from 10.0.0.1 port 50264 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:29:21.818621 sshd-session[3114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:29:21.823619 systemd-logind[1469]: New session 8 of user core. Feb 13 15:29:21.830632 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:29:21.972999 sshd[3120]: Connection closed by 10.0.0.1 port 50264 Feb 13 15:29:21.973740 sshd-session[3114]: pam_unix(sshd:session): session closed for user core Feb 13 15:29:21.979235 systemd[1]: sshd@7-10.0.0.34:22-10.0.0.1:50264.service: Deactivated successfully. Feb 13 15:29:21.983592 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:29:21.985392 systemd-logind[1469]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:29:21.986681 systemd-logind[1469]: Removed session 8. Feb 13 15:29:23.348935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3990093203.mount: Deactivated successfully. 
Feb 13 15:29:26.246665 containerd[1484]: time="2025-02-13T15:29:26.246605809Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:29:26.247371 containerd[1484]: time="2025-02-13T15:29:26.247320275Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 15:29:26.248595 containerd[1484]: time="2025-02-13T15:29:26.248565098Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:29:26.250136 containerd[1484]: time="2025-02-13T15:29:26.250106841Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.134475088s" Feb 13 15:29:26.250136 containerd[1484]: time="2025-02-13T15:29:26.250135094Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 15:29:26.258098 containerd[1484]: time="2025-02-13T15:29:26.258054504Z" level=info msg="CreateContainer within sandbox \"ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:29:26.273057 containerd[1484]: time="2025-02-13T15:29:26.273012887Z" level=info msg="CreateContainer within sandbox \"ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572\"" Feb 13 15:29:26.274046 containerd[1484]: time="2025-02-13T15:29:26.273451994Z" level=info msg="StartContainer for \"c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572\"" Feb 13 15:29:26.308583 systemd[1]: Started cri-containerd-c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572.scope - libcontainer container c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572. Feb 13 15:29:26.334757 containerd[1484]: time="2025-02-13T15:29:26.334644054Z" level=info msg="StartContainer for \"c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572\" returns successfully" Feb 13 15:29:26.346867 systemd[1]: cri-containerd-c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572.scope: Deactivated successfully. 
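The cilium image pull that completes above reports both the bytes read (166730503) and the elapsed time (13.134475088s), which is enough to estimate the effective pull throughput; this is a trivial calculation on those two logged numbers and nothing more:

    package main

    import "fmt"

    func main() {
        const bytesRead = 166730503      // "bytes read" from the stop-pulling entry
        const pullSeconds = 13.134475088 // duration reported by the Pulled entry
        mibps := float64(bytesRead) / pullSeconds / (1 << 20)
        fmt.Printf("effective pull throughput: %.1f MiB/s\n", mibps) // roughly 12 MiB/s
    }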
Feb 13 15:29:26.837518 containerd[1484]: time="2025-02-13T15:29:26.837398060Z" level=info msg="shim disconnected" id=c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572 namespace=k8s.io Feb 13 15:29:26.837518 containerd[1484]: time="2025-02-13T15:29:26.837511534Z" level=warning msg="cleaning up after shim disconnected" id=c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572 namespace=k8s.io Feb 13 15:29:26.837518 containerd[1484]: time="2025-02-13T15:29:26.837529598Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:29:26.990828 systemd[1]: Started sshd@8-10.0.0.34:22-10.0.0.1:50276.service - OpenSSH per-connection server daemon (10.0.0.1:50276). Feb 13 15:29:27.005286 kubelet[2680]: E0213 15:29:27.005247 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:27.008930 containerd[1484]: time="2025-02-13T15:29:27.008846142Z" level=info msg="CreateContainer within sandbox \"ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:29:27.028722 containerd[1484]: time="2025-02-13T15:29:27.028665336Z" level=info msg="CreateContainer within sandbox \"ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248\"" Feb 13 15:29:27.029648 containerd[1484]: time="2025-02-13T15:29:27.029594626Z" level=info msg="StartContainer for \"c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248\"" Feb 13 15:29:27.050168 sshd[3219]: Accepted publickey for core from 10.0.0.1 port 50276 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:29:27.052385 sshd-session[3219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:29:27.059612 systemd[1]: Started cri-containerd-c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248.scope - libcontainer container c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248. Feb 13 15:29:27.064115 systemd-logind[1469]: New session 9 of user core. Feb 13 15:29:27.067581 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:29:27.091959 containerd[1484]: time="2025-02-13T15:29:27.091738588Z" level=info msg="StartContainer for \"c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248\" returns successfully" Feb 13 15:29:27.106206 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:29:27.106484 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:29:27.107078 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:29:27.112130 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:29:27.112464 systemd[1]: cri-containerd-c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248.scope: Deactivated successfully. Feb 13 15:29:27.138626 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 15:29:27.167947 containerd[1484]: time="2025-02-13T15:29:27.167860811Z" level=info msg="shim disconnected" id=c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248 namespace=k8s.io Feb 13 15:29:27.167947 containerd[1484]: time="2025-02-13T15:29:27.167943898Z" level=warning msg="cleaning up after shim disconnected" id=c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248 namespace=k8s.io Feb 13 15:29:27.167947 containerd[1484]: time="2025-02-13T15:29:27.167956321Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:29:27.201704 sshd[3245]: Connection closed by 10.0.0.1 port 50276 Feb 13 15:29:27.202102 sshd-session[3219]: pam_unix(sshd:session): session closed for user core Feb 13 15:29:27.206756 systemd[1]: sshd@8-10.0.0.34:22-10.0.0.1:50276.service: Deactivated successfully. Feb 13 15:29:27.209115 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:29:27.209840 systemd-logind[1469]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:29:27.210799 systemd-logind[1469]: Removed session 9. Feb 13 15:29:27.269596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572-rootfs.mount: Deactivated successfully. Feb 13 15:29:28.009044 kubelet[2680]: E0213 15:29:28.009005 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:28.012328 containerd[1484]: time="2025-02-13T15:29:28.012269186Z" level=info msg="CreateContainer within sandbox \"ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:29:28.132095 containerd[1484]: time="2025-02-13T15:29:28.132034183Z" level=info msg="CreateContainer within sandbox \"ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17\"" Feb 13 15:29:28.132714 containerd[1484]: time="2025-02-13T15:29:28.132685828Z" level=info msg="StartContainer for \"e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17\"" Feb 13 15:29:28.163601 systemd[1]: Started cri-containerd-e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17.scope - libcontainer container e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17. Feb 13 15:29:28.202999 containerd[1484]: time="2025-02-13T15:29:28.202949884Z" level=info msg="StartContainer for \"e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17\" returns successfully" Feb 13 15:29:28.204802 systemd[1]: cri-containerd-e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17.scope: Deactivated successfully. 
Feb 13 15:29:28.233351 containerd[1484]: time="2025-02-13T15:29:28.233274920Z" level=info msg="shim disconnected" id=e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17 namespace=k8s.io Feb 13 15:29:28.233351 containerd[1484]: time="2025-02-13T15:29:28.233340673Z" level=warning msg="cleaning up after shim disconnected" id=e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17 namespace=k8s.io Feb 13 15:29:28.233351 containerd[1484]: time="2025-02-13T15:29:28.233350020Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:29:28.269084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17-rootfs.mount: Deactivated successfully. Feb 13 15:29:29.012223 kubelet[2680]: E0213 15:29:29.012193 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:29.014011 containerd[1484]: time="2025-02-13T15:29:29.013946710Z" level=info msg="CreateContainer within sandbox \"ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:29:29.030770 containerd[1484]: time="2025-02-13T15:29:29.030729625Z" level=info msg="CreateContainer within sandbox \"ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d\"" Feb 13 15:29:29.031371 containerd[1484]: time="2025-02-13T15:29:29.031348419Z" level=info msg="StartContainer for \"8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d\"" Feb 13 15:29:29.062597 systemd[1]: Started cri-containerd-8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d.scope - libcontainer container 8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d. Feb 13 15:29:29.087336 systemd[1]: cri-containerd-8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d.scope: Deactivated successfully. Feb 13 15:29:29.089388 containerd[1484]: time="2025-02-13T15:29:29.089282245Z" level=info msg="StartContainer for \"8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d\" returns successfully" Feb 13 15:29:29.113956 containerd[1484]: time="2025-02-13T15:29:29.113894431Z" level=info msg="shim disconnected" id=8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d namespace=k8s.io Feb 13 15:29:29.113956 containerd[1484]: time="2025-02-13T15:29:29.113952730Z" level=warning msg="cleaning up after shim disconnected" id=8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d namespace=k8s.io Feb 13 15:29:29.113956 containerd[1484]: time="2025-02-13T15:29:29.113962709Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:29:29.271229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d-rootfs.mount: Deactivated successfully. 
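The four short-lived containers above — mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state — all run inside the cilium-ttkr8 sandbox, and each follows the same pattern: CreateContainer, "StartContainer ... returns successfully", the scope is deactivated, and containerd logs "shim disconnected" for the same id. That sequence is consistent with Cilium's init containers running one after another before the long-lived cilium-agent container that follows. One way to reconstruct those lifetimes from a capture like this is to pair the two messages by container id; a rough sketch that reads journal text on stdin (the regular expressions are written against the msg fields shown above and are illustrative only):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        startRe := regexp.MustCompile(`StartContainer for \W*([0-9a-f]{64})\W* returns successfully`)
        stopRe := regexp.MustCompile(`shim disconnected.*?id=([0-9a-f]{64})`)

        started := map[string]bool{} // container ids seen starting
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            line := sc.Text()
            if m := startRe.FindStringSubmatch(line); m != nil {
                started[m[1]] = true
            } else if m := stopRe.FindStringSubmatch(line); m != nil && started[m[1]] {
                fmt.Printf("container %s: started and exited (shim gone)\n", m[1][:12])
                delete(started, m[1])
            }
        }
        for id := range started {
            fmt.Printf("container %s: still running at end of capture\n", id[:12])
        }
    }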
Feb 13 15:29:30.016026 kubelet[2680]: E0213 15:29:30.015976 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:30.019467 containerd[1484]: time="2025-02-13T15:29:30.018861143Z" level=info msg="CreateContainer within sandbox \"ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:29:30.415050 containerd[1484]: time="2025-02-13T15:29:30.414961065Z" level=info msg="CreateContainer within sandbox \"ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b\"" Feb 13 15:29:30.415725 containerd[1484]: time="2025-02-13T15:29:30.415671772Z" level=info msg="StartContainer for \"81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b\"" Feb 13 15:29:30.441405 systemd[1]: run-containerd-runc-k8s.io-81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b-runc.tHUQNf.mount: Deactivated successfully. Feb 13 15:29:30.450580 systemd[1]: Started cri-containerd-81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b.scope - libcontainer container 81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b. Feb 13 15:29:30.493196 containerd[1484]: time="2025-02-13T15:29:30.493140429Z" level=info msg="StartContainer for \"81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b\" returns successfully" Feb 13 15:29:30.629130 kubelet[2680]: I0213 15:29:30.629088 2680 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:29:30.905702 kubelet[2680]: I0213 15:29:30.904950 2680 topology_manager.go:215] "Topology Admit Handler" podUID="f3c76418-4d1d-46f6-86a2-4b0712361df3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-r68wd" Feb 13 15:29:30.911124 systemd[1]: Created slice kubepods-burstable-podf3c76418_4d1d_46f6_86a2_4b0712361df3.slice - libcontainer container kubepods-burstable-podf3c76418_4d1d_46f6_86a2_4b0712361df3.slice. Feb 13 15:29:30.962515 kubelet[2680]: I0213 15:29:30.962315 2680 topology_manager.go:215] "Topology Admit Handler" podUID="01812893-7c0f-4dc2-b0ee-d6a356eeaeed" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6xfzv" Feb 13 15:29:30.968722 systemd[1]: Created slice kubepods-burstable-pod01812893_7c0f_4dc2_b0ee_d6a356eeaeed.slice - libcontainer container kubepods-burstable-pod01812893_7c0f_4dc2_b0ee_d6a356eeaeed.slice. 
Feb 13 15:29:31.019946 kubelet[2680]: I0213 15:29:31.019853 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3c76418-4d1d-46f6-86a2-4b0712361df3-config-volume\") pod \"coredns-7db6d8ff4d-r68wd\" (UID: \"f3c76418-4d1d-46f6-86a2-4b0712361df3\") " pod="kube-system/coredns-7db6d8ff4d-r68wd" Feb 13 15:29:31.019946 kubelet[2680]: I0213 15:29:31.019908 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg5qq\" (UniqueName: \"kubernetes.io/projected/f3c76418-4d1d-46f6-86a2-4b0712361df3-kube-api-access-bg5qq\") pod \"coredns-7db6d8ff4d-r68wd\" (UID: \"f3c76418-4d1d-46f6-86a2-4b0712361df3\") " pod="kube-system/coredns-7db6d8ff4d-r68wd" Feb 13 15:29:31.023376 kubelet[2680]: E0213 15:29:31.023338 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:31.122393 kubelet[2680]: I0213 15:29:31.120815 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01812893-7c0f-4dc2-b0ee-d6a356eeaeed-config-volume\") pod \"coredns-7db6d8ff4d-6xfzv\" (UID: \"01812893-7c0f-4dc2-b0ee-d6a356eeaeed\") " pod="kube-system/coredns-7db6d8ff4d-6xfzv" Feb 13 15:29:31.122393 kubelet[2680]: I0213 15:29:31.120900 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n86wn\" (UniqueName: \"kubernetes.io/projected/01812893-7c0f-4dc2-b0ee-d6a356eeaeed-kube-api-access-n86wn\") pod \"coredns-7db6d8ff4d-6xfzv\" (UID: \"01812893-7c0f-4dc2-b0ee-d6a356eeaeed\") " pod="kube-system/coredns-7db6d8ff4d-6xfzv" Feb 13 15:29:31.214996 kubelet[2680]: E0213 15:29:31.214850 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:31.215559 containerd[1484]: time="2025-02-13T15:29:31.215511605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r68wd,Uid:f3c76418-4d1d-46f6-86a2-4b0712361df3,Namespace:kube-system,Attempt:0,}" Feb 13 15:29:31.272577 kubelet[2680]: E0213 15:29:31.272530 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:31.273238 containerd[1484]: time="2025-02-13T15:29:31.273175319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6xfzv,Uid:01812893-7c0f-4dc2-b0ee-d6a356eeaeed,Namespace:kube-system,Attempt:0,}" Feb 13 15:29:32.024824 kubelet[2680]: E0213 15:29:32.024786 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:32.226989 systemd[1]: Started sshd@9-10.0.0.34:22-10.0.0.1:36864.service - OpenSSH per-connection server daemon (10.0.0.1:36864). Feb 13 15:29:32.267717 sshd[3547]: Accepted publickey for core from 10.0.0.1 port 36864 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:29:32.269705 sshd-session[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:29:32.275062 systemd-logind[1469]: New session 10 of user core. 
Feb 13 15:29:32.284710 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:29:32.475552 sshd[3549]: Connection closed by 10.0.0.1 port 36864 Feb 13 15:29:32.475955 sshd-session[3547]: pam_unix(sshd:session): session closed for user core Feb 13 15:29:32.479935 systemd[1]: sshd@9-10.0.0.34:22-10.0.0.1:36864.service: Deactivated successfully. Feb 13 15:29:32.482199 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:29:32.482945 systemd-logind[1469]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:29:32.483947 systemd-logind[1469]: Removed session 10. Feb 13 15:29:32.883130 systemd-networkd[1415]: cilium_host: Link UP Feb 13 15:29:32.883341 systemd-networkd[1415]: cilium_net: Link UP Feb 13 15:29:32.883649 systemd-networkd[1415]: cilium_net: Gained carrier Feb 13 15:29:32.883883 systemd-networkd[1415]: cilium_host: Gained carrier Feb 13 15:29:32.992631 systemd-networkd[1415]: cilium_vxlan: Link UP Feb 13 15:29:32.992641 systemd-networkd[1415]: cilium_vxlan: Gained carrier Feb 13 15:29:33.026656 kubelet[2680]: E0213 15:29:33.026614 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:33.254646 systemd-networkd[1415]: cilium_host: Gained IPv6LL Feb 13 15:29:33.328483 kernel: NET: Registered PF_ALG protocol family Feb 13 15:29:33.878673 systemd-networkd[1415]: cilium_net: Gained IPv6LL Feb 13 15:29:34.059712 systemd-networkd[1415]: lxc_health: Link UP Feb 13 15:29:34.062314 systemd-networkd[1415]: lxc_health: Gained carrier Feb 13 15:29:34.321020 systemd-networkd[1415]: lxc87d5f12555ec: Link UP Feb 13 15:29:34.328754 kernel: eth0: renamed from tmp49793 Feb 13 15:29:34.339283 systemd-networkd[1415]: lxc87d5f12555ec: Gained carrier Feb 13 15:29:34.341255 systemd-networkd[1415]: lxcad17e817ca1c: Link UP Feb 13 15:29:34.352480 kernel: eth0: renamed from tmp926a4 Feb 13 15:29:34.363310 systemd-networkd[1415]: lxcad17e817ca1c: Gained carrier Feb 13 15:29:34.700992 kubelet[2680]: E0213 15:29:34.700937 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:34.718530 kubelet[2680]: I0213 15:29:34.717508 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ttkr8" podStartSLOduration=9.30253516 podStartE2EDuration="24.717485467s" podCreationTimestamp="2025-02-13 15:29:10 +0000 UTC" firstStartedPulling="2025-02-13 15:29:10.836424251 +0000 UTC m=+15.043369040" lastFinishedPulling="2025-02-13 15:29:26.251374558 +0000 UTC m=+30.458319347" observedRunningTime="2025-02-13 15:29:31.102707714 +0000 UTC m=+35.309652503" watchObservedRunningTime="2025-02-13 15:29:34.717485467 +0000 UTC m=+38.924430256" Feb 13 15:29:34.969538 systemd-networkd[1415]: cilium_vxlan: Gained IPv6LL Feb 13 15:29:35.734640 systemd-networkd[1415]: lxc_health: Gained IPv6LL Feb 13 15:29:35.734991 systemd-networkd[1415]: lxc87d5f12555ec: Gained IPv6LL Feb 13 15:29:35.862583 systemd-networkd[1415]: lxcad17e817ca1c: Gained IPv6LL Feb 13 15:29:37.489636 systemd[1]: Started sshd@10-10.0.0.34:22-10.0.0.1:36874.service - OpenSSH per-connection server daemon (10.0.0.1:36874). 
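Once the agent is up, systemd-networkd reports a set of new links gaining carrier: cilium_host and cilium_net, the cilium_vxlan overlay device, lxc_health, and one lxc* veth per pod; the kernel's "eth0: renamed from tmp..." lines are most likely the container-side ends of those veth pairs being renamed inside the pod network namespaces. A quick way to see the same picture from the host, as a stdlib-only sketch; only the cilium_ and lxc name prefixes are taken from the log:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        for _, ifc := range ifaces {
            // Datapath devices created by the agent vs. per-pod veth ends.
            switch {
            case strings.HasPrefix(ifc.Name, "cilium_"):
                fmt.Printf("%-20s datapath device (flags: %v)\n", ifc.Name, ifc.Flags)
            case strings.HasPrefix(ifc.Name, "lxc"):
                fmt.Printf("%-20s pod-facing veth  (flags: %v)\n", ifc.Name, ifc.Flags)
            }
        }
    }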
Feb 13 15:29:37.542435 sshd[3943]: Accepted publickey for core from 10.0.0.1 port 36874 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:29:37.546184 sshd-session[3943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:29:37.555496 systemd-logind[1469]: New session 11 of user core. Feb 13 15:29:37.557902 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:29:37.705022 sshd[3948]: Connection closed by 10.0.0.1 port 36874 Feb 13 15:29:37.705690 sshd-session[3943]: pam_unix(sshd:session): session closed for user core Feb 13 15:29:37.717809 systemd[1]: sshd@10-10.0.0.34:22-10.0.0.1:36874.service: Deactivated successfully. Feb 13 15:29:37.720170 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:29:37.722014 systemd-logind[1469]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:29:37.727402 systemd[1]: Started sshd@11-10.0.0.34:22-10.0.0.1:36882.service - OpenSSH per-connection server daemon (10.0.0.1:36882). Feb 13 15:29:37.729387 systemd-logind[1469]: Removed session 11. Feb 13 15:29:37.765974 sshd[3966]: Accepted publickey for core from 10.0.0.1 port 36882 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:29:37.767596 sshd-session[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:29:37.773867 containerd[1484]: time="2025-02-13T15:29:37.773712144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:29:37.775123 containerd[1484]: time="2025-02-13T15:29:37.775060257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:29:37.775511 containerd[1484]: time="2025-02-13T15:29:37.775131100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:29:37.775511 containerd[1484]: time="2025-02-13T15:29:37.775237860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:29:37.776132 containerd[1484]: time="2025-02-13T15:29:37.775507546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:29:37.776132 containerd[1484]: time="2025-02-13T15:29:37.775554085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:29:37.776132 containerd[1484]: time="2025-02-13T15:29:37.775571026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:29:37.776132 containerd[1484]: time="2025-02-13T15:29:37.775673108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:29:37.781300 systemd-logind[1469]: New session 12 of user core. Feb 13 15:29:37.793643 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:29:37.811591 systemd[1]: Started cri-containerd-49793abef4fcd7989c9097caa3958e88417b4665dbba5e4d83e99b52dddf7915.scope - libcontainer container 49793abef4fcd7989c9097caa3958e88417b4665dbba5e4d83e99b52dddf7915. 
Feb 13 15:29:37.813257 systemd[1]: Started cri-containerd-926a4f8933a4f3525c2fc8a79a04fb679a61f74b184f0862b0543d508b452501.scope - libcontainer container 926a4f8933a4f3525c2fc8a79a04fb679a61f74b184f0862b0543d508b452501. Feb 13 15:29:37.824957 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:29:37.829145 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:29:37.860700 containerd[1484]: time="2025-02-13T15:29:37.860646046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r68wd,Uid:f3c76418-4d1d-46f6-86a2-4b0712361df3,Namespace:kube-system,Attempt:0,} returns sandbox id \"49793abef4fcd7989c9097caa3958e88417b4665dbba5e4d83e99b52dddf7915\"" Feb 13 15:29:37.863074 containerd[1484]: time="2025-02-13T15:29:37.862915190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6xfzv,Uid:01812893-7c0f-4dc2-b0ee-d6a356eeaeed,Namespace:kube-system,Attempt:0,} returns sandbox id \"926a4f8933a4f3525c2fc8a79a04fb679a61f74b184f0862b0543d508b452501\"" Feb 13 15:29:37.864712 kubelet[2680]: E0213 15:29:37.863620 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:37.864712 kubelet[2680]: E0213 15:29:37.863773 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:37.867067 containerd[1484]: time="2025-02-13T15:29:37.866984385Z" level=info msg="CreateContainer within sandbox \"926a4f8933a4f3525c2fc8a79a04fb679a61f74b184f0862b0543d508b452501\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:29:37.867181 containerd[1484]: time="2025-02-13T15:29:37.867150557Z" level=info msg="CreateContainer within sandbox \"49793abef4fcd7989c9097caa3958e88417b4665dbba5e4d83e99b52dddf7915\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:29:37.892760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1494993608.mount: Deactivated successfully. Feb 13 15:29:37.896132 containerd[1484]: time="2025-02-13T15:29:37.896085530Z" level=info msg="CreateContainer within sandbox \"926a4f8933a4f3525c2fc8a79a04fb679a61f74b184f0862b0543d508b452501\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0e2d37ba93860d107fc22a2ec700377fed172e800dd416466f8c3d9317897a77\"" Feb 13 15:29:37.896780 containerd[1484]: time="2025-02-13T15:29:37.896751752Z" level=info msg="StartContainer for \"0e2d37ba93860d107fc22a2ec700377fed172e800dd416466f8c3d9317897a77\"" Feb 13 15:29:37.900712 containerd[1484]: time="2025-02-13T15:29:37.900671447Z" level=info msg="CreateContainer within sandbox \"49793abef4fcd7989c9097caa3958e88417b4665dbba5e4d83e99b52dddf7915\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e29975b417bb95294b5a9731c6b9c97341f264adf918d0ca9b6aaa829c7e51f\"" Feb 13 15:29:37.901259 containerd[1484]: time="2025-02-13T15:29:37.901235446Z" level=info msg="StartContainer for \"7e29975b417bb95294b5a9731c6b9c97341f264adf918d0ca9b6aaa829c7e51f\"" Feb 13 15:29:37.929765 systemd[1]: Started cri-containerd-0e2d37ba93860d107fc22a2ec700377fed172e800dd416466f8c3d9317897a77.scope - libcontainer container 0e2d37ba93860d107fc22a2ec700377fed172e800dd416466f8c3d9317897a77. 
Feb 13 15:29:37.933080 systemd[1]: Started cri-containerd-7e29975b417bb95294b5a9731c6b9c97341f264adf918d0ca9b6aaa829c7e51f.scope - libcontainer container 7e29975b417bb95294b5a9731c6b9c97341f264adf918d0ca9b6aaa829c7e51f. Feb 13 15:29:37.977074 containerd[1484]: time="2025-02-13T15:29:37.977032104Z" level=info msg="StartContainer for \"0e2d37ba93860d107fc22a2ec700377fed172e800dd416466f8c3d9317897a77\" returns successfully" Feb 13 15:29:37.977311 containerd[1484]: time="2025-02-13T15:29:37.977232682Z" level=info msg="StartContainer for \"7e29975b417bb95294b5a9731c6b9c97341f264adf918d0ca9b6aaa829c7e51f\" returns successfully" Feb 13 15:29:37.989780 sshd[4024]: Connection closed by 10.0.0.1 port 36882 Feb 13 15:29:37.990358 sshd-session[3966]: pam_unix(sshd:session): session closed for user core Feb 13 15:29:38.004824 systemd[1]: sshd@11-10.0.0.34:22-10.0.0.1:36882.service: Deactivated successfully. Feb 13 15:29:38.008159 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:29:38.009182 systemd-logind[1469]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:29:38.015733 systemd[1]: Started sshd@12-10.0.0.34:22-10.0.0.1:36884.service - OpenSSH per-connection server daemon (10.0.0.1:36884). Feb 13 15:29:38.019516 systemd-logind[1469]: Removed session 12. Feb 13 15:29:38.040775 kubelet[2680]: E0213 15:29:38.040737 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:38.046709 kubelet[2680]: E0213 15:29:38.046678 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:38.061220 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 36884 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:29:38.066476 kubelet[2680]: I0213 15:29:38.064713 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6xfzv" podStartSLOduration=29.064691836 podStartE2EDuration="29.064691836s" podCreationTimestamp="2025-02-13 15:29:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:29:38.064264092 +0000 UTC m=+42.271208881" watchObservedRunningTime="2025-02-13 15:29:38.064691836 +0000 UTC m=+42.271636625" Feb 13 15:29:38.065253 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:29:38.080863 systemd-logind[1469]: New session 13 of user core. Feb 13 15:29:38.088644 kubelet[2680]: I0213 15:29:38.088570 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-r68wd" podStartSLOduration=29.088550015 podStartE2EDuration="29.088550015s" podCreationTimestamp="2025-02-13 15:29:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:29:38.087815014 +0000 UTC m=+42.294759803" watchObservedRunningTime="2025-02-13 15:29:38.088550015 +0000 UTC m=+42.295494804" Feb 13 15:29:38.089341 systemd[1]: Started session-13.scope - Session 13 of User core. 
Feb 13 15:29:38.211975 sshd[4136]: Connection closed by 10.0.0.1 port 36884 Feb 13 15:29:38.212549 sshd-session[4129]: pam_unix(sshd:session): session closed for user core Feb 13 15:29:38.216507 systemd[1]: sshd@12-10.0.0.34:22-10.0.0.1:36884.service: Deactivated successfully. Feb 13 15:29:38.218859 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:29:38.219596 systemd-logind[1469]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:29:38.220965 systemd-logind[1469]: Removed session 13. Feb 13 15:29:38.782164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3356578512.mount: Deactivated successfully. Feb 13 15:29:39.048047 kubelet[2680]: E0213 15:29:39.047838 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:39.111487 kubelet[2680]: I0213 15:29:39.110967 2680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:29:39.111964 kubelet[2680]: E0213 15:29:39.111936 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:40.049639 kubelet[2680]: E0213 15:29:40.049601 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:40.050156 kubelet[2680]: E0213 15:29:40.049789 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:41.216910 kubelet[2680]: E0213 15:29:41.216184 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:42.054031 kubelet[2680]: E0213 15:29:42.053987 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:43.227545 systemd[1]: Started sshd@13-10.0.0.34:22-10.0.0.1:39156.service - OpenSSH per-connection server daemon (10.0.0.1:39156). Feb 13 15:29:43.267838 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 39156 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:29:43.269284 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:29:43.273664 systemd-logind[1469]: New session 14 of user core. Feb 13 15:29:43.286615 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:29:43.401459 sshd[4168]: Connection closed by 10.0.0.1 port 39156 Feb 13 15:29:43.401845 sshd-session[4166]: pam_unix(sshd:session): session closed for user core Feb 13 15:29:43.406207 systemd[1]: sshd@13-10.0.0.34:22-10.0.0.1:39156.service: Deactivated successfully. Feb 13 15:29:43.408653 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:29:43.409462 systemd-logind[1469]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:29:43.410322 systemd-logind[1469]: Removed session 14. Feb 13 15:29:48.414820 systemd[1]: Started sshd@14-10.0.0.34:22-10.0.0.1:39160.service - OpenSSH per-connection server daemon (10.0.0.1:39160). 
Feb 13 15:29:48.458070 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 39160 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:29:48.459726 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:29:48.464736 systemd-logind[1469]: New session 15 of user core. Feb 13 15:29:48.474637 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:29:48.594325 sshd[4184]: Connection closed by 10.0.0.1 port 39160 Feb 13 15:29:48.594773 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Feb 13 15:29:48.603701 systemd[1]: sshd@14-10.0.0.34:22-10.0.0.1:39160.service: Deactivated successfully. Feb 13 15:29:48.605917 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:29:48.607914 systemd-logind[1469]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:29:48.614716 systemd[1]: Started sshd@15-10.0.0.34:22-10.0.0.1:39168.service - OpenSSH per-connection server daemon (10.0.0.1:39168). Feb 13 15:29:48.615890 systemd-logind[1469]: Removed session 15. Feb 13 15:29:48.653648 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 39168 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:29:48.655250 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:29:48.660472 systemd-logind[1469]: New session 16 of user core. Feb 13 15:29:48.667567 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:29:48.908004 sshd[4199]: Connection closed by 10.0.0.1 port 39168 Feb 13 15:29:48.908556 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Feb 13 15:29:48.917576 systemd[1]: sshd@15-10.0.0.34:22-10.0.0.1:39168.service: Deactivated successfully. Feb 13 15:29:48.920023 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:29:48.921976 systemd-logind[1469]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:29:48.923566 systemd[1]: Started sshd@16-10.0.0.34:22-10.0.0.1:39174.service - OpenSSH per-connection server daemon (10.0.0.1:39174). Feb 13 15:29:48.924633 systemd-logind[1469]: Removed session 16. Feb 13 15:29:48.966635 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 39174 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:29:48.968292 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:29:48.973045 systemd-logind[1469]: New session 17 of user core. Feb 13 15:29:48.982629 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:29:50.503565 sshd[4213]: Connection closed by 10.0.0.1 port 39174 Feb 13 15:29:50.504159 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Feb 13 15:29:50.513878 systemd[1]: sshd@16-10.0.0.34:22-10.0.0.1:39174.service: Deactivated successfully. Feb 13 15:29:50.518382 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:29:50.520684 systemd-logind[1469]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:29:50.533789 systemd[1]: Started sshd@17-10.0.0.34:22-10.0.0.1:42560.service - OpenSSH per-connection server daemon (10.0.0.1:42560). Feb 13 15:29:50.537204 systemd-logind[1469]: Removed session 17. 
Feb 13 15:29:50.573729 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 42560 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:29:50.575677 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:29:50.580641 systemd-logind[1469]: New session 18 of user core. Feb 13 15:29:50.595597 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:29:50.836824 sshd[4234]: Connection closed by 10.0.0.1 port 42560 Feb 13 15:29:50.837580 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Feb 13 15:29:50.853172 systemd[1]: sshd@17-10.0.0.34:22-10.0.0.1:42560.service: Deactivated successfully. Feb 13 15:29:50.855753 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:29:50.856851 systemd-logind[1469]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:29:50.870800 systemd[1]: Started sshd@18-10.0.0.34:22-10.0.0.1:42572.service - OpenSSH per-connection server daemon (10.0.0.1:42572). Feb 13 15:29:50.871622 systemd-logind[1469]: Removed session 18. Feb 13 15:29:50.908356 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 42572 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:29:50.910266 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:29:50.915237 systemd-logind[1469]: New session 19 of user core. Feb 13 15:29:50.925600 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:29:51.038058 sshd[4248]: Connection closed by 10.0.0.1 port 42572 Feb 13 15:29:51.038496 sshd-session[4245]: pam_unix(sshd:session): session closed for user core Feb 13 15:29:51.043247 systemd[1]: sshd@18-10.0.0.34:22-10.0.0.1:42572.service: Deactivated successfully. Feb 13 15:29:51.045965 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:29:51.046820 systemd-logind[1469]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:29:51.047876 systemd-logind[1469]: Removed session 19. Feb 13 15:29:56.051486 systemd[1]: Started sshd@19-10.0.0.34:22-10.0.0.1:42574.service - OpenSSH per-connection server daemon (10.0.0.1:42574). Feb 13 15:29:56.092311 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 42574 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:29:56.094057 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:29:56.098733 systemd-logind[1469]: New session 20 of user core. Feb 13 15:29:56.109600 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:29:56.216566 sshd[4266]: Connection closed by 10.0.0.1 port 42574 Feb 13 15:29:56.216930 sshd-session[4264]: pam_unix(sshd:session): session closed for user core Feb 13 15:29:56.221080 systemd[1]: sshd@19-10.0.0.34:22-10.0.0.1:42574.service: Deactivated successfully. Feb 13 15:29:56.223291 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:29:56.224116 systemd-logind[1469]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:29:56.225125 systemd-logind[1469]: Removed session 20. Feb 13 15:30:01.234381 systemd[1]: Started sshd@20-10.0.0.34:22-10.0.0.1:54296.service - OpenSSH per-connection server daemon (10.0.0.1:54296). 
Feb 13 15:30:01.277996 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 54296 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:30:01.279847 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:30:01.284041 systemd-logind[1469]: New session 21 of user core. Feb 13 15:30:01.294614 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:30:01.397927 sshd[4284]: Connection closed by 10.0.0.1 port 54296 Feb 13 15:30:01.398277 sshd-session[4282]: pam_unix(sshd:session): session closed for user core Feb 13 15:30:01.402180 systemd[1]: sshd@20-10.0.0.34:22-10.0.0.1:54296.service: Deactivated successfully. Feb 13 15:30:01.404270 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:30:01.404929 systemd-logind[1469]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:30:01.405734 systemd-logind[1469]: Removed session 21. Feb 13 15:30:06.412184 systemd[1]: Started sshd@21-10.0.0.34:22-10.0.0.1:54302.service - OpenSSH per-connection server daemon (10.0.0.1:54302). Feb 13 15:30:06.453682 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 54302 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:30:06.455321 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:30:06.460229 systemd-logind[1469]: New session 22 of user core. Feb 13 15:30:06.467597 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:30:06.579745 sshd[4300]: Connection closed by 10.0.0.1 port 54302 Feb 13 15:30:06.580149 sshd-session[4298]: pam_unix(sshd:session): session closed for user core Feb 13 15:30:06.584084 systemd[1]: sshd@21-10.0.0.34:22-10.0.0.1:54302.service: Deactivated successfully. Feb 13 15:30:06.586224 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:30:06.586938 systemd-logind[1469]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:30:06.587929 systemd-logind[1469]: Removed session 22. Feb 13 15:30:11.605765 systemd[1]: Started sshd@22-10.0.0.34:22-10.0.0.1:48376.service - OpenSSH per-connection server daemon (10.0.0.1:48376). Feb 13 15:30:11.641238 sshd[4316]: Accepted publickey for core from 10.0.0.1 port 48376 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:30:11.642878 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:30:11.647027 systemd-logind[1469]: New session 23 of user core. Feb 13 15:30:11.654575 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:30:11.761775 sshd[4318]: Connection closed by 10.0.0.1 port 48376 Feb 13 15:30:11.762220 sshd-session[4316]: pam_unix(sshd:session): session closed for user core Feb 13 15:30:11.772434 systemd[1]: sshd@22-10.0.0.34:22-10.0.0.1:48376.service: Deactivated successfully. Feb 13 15:30:11.774562 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:30:11.776105 systemd-logind[1469]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:30:11.787869 systemd[1]: Started sshd@23-10.0.0.34:22-10.0.0.1:48390.service - OpenSSH per-connection server daemon (10.0.0.1:48390). Feb 13 15:30:11.789155 systemd-logind[1469]: Removed session 23. 
Feb 13 15:30:11.824400 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 48390 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:30:11.825930 sshd-session[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:30:11.830690 systemd-logind[1469]: New session 24 of user core. Feb 13 15:30:11.841597 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:30:13.192735 containerd[1484]: time="2025-02-13T15:30:13.192686819Z" level=info msg="StopContainer for \"c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267\" with timeout 30 (s)" Feb 13 15:30:13.201121 systemd[1]: run-containerd-runc-k8s.io-81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b-runc.vMXrM7.mount: Deactivated successfully. Feb 13 15:30:13.213923 containerd[1484]: time="2025-02-13T15:30:13.213883964Z" level=info msg="Stop container \"c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267\" with signal terminated" Feb 13 15:30:13.230136 systemd[1]: cri-containerd-c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267.scope: Deactivated successfully. Feb 13 15:30:13.235060 containerd[1484]: time="2025-02-13T15:30:13.234996197Z" level=info msg="StopContainer for \"81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b\" with timeout 2 (s)" Feb 13 15:30:13.235314 containerd[1484]: time="2025-02-13T15:30:13.235283388Z" level=info msg="Stop container \"81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b\" with signal terminated" Feb 13 15:30:13.237621 containerd[1484]: time="2025-02-13T15:30:13.236197904Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:30:13.246327 systemd-networkd[1415]: lxc_health: Link DOWN Feb 13 15:30:13.246335 systemd-networkd[1415]: lxc_health: Lost carrier Feb 13 15:30:13.256519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267-rootfs.mount: Deactivated successfully. Feb 13 15:30:13.268014 systemd[1]: cri-containerd-81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b.scope: Deactivated successfully. Feb 13 15:30:13.268374 systemd[1]: cri-containerd-81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b.scope: Consumed 6.986s CPU time, 124.8M memory peak, 164K read from disk, 13.3M written to disk. Feb 13 15:30:13.289140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b-rootfs.mount: Deactivated successfully. 
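The block above is the graceful-stop path for the two Cilium containers: containerd logs "Stop container ... with signal terminated" using the 30s and 2s timeouts as grace periods, systemd deactivates the cri-containerd scopes, and the rootfs mounts are cleaned up. A hedged sketch of the same SIGTERM-then-SIGKILL pattern against the containerd Go client (illustrative only; the node drives this through the CRI plugin, and the socket path and k8s.io namespace are the usual defaults rather than values taken from this log):

    package main

    import (
        "context"
        "log"
        "syscall"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    // stopWithTimeout sends SIGTERM and escalates to SIGKILL after the grace period.
    func stopWithTimeout(ctx context.Context, client *containerd.Client, id string, grace time.Duration) error {
        container, err := client.LoadContainer(ctx, id)
        if err != nil {
            return err
        }
        task, err := container.Task(ctx, nil)
        if err != nil {
            return err
        }
        exitCh, err := task.Wait(ctx)
        if err != nil {
            return err
        }
        if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
            return err
        }
        select {
        case <-exitCh: // exited within the grace period
        case <-time.After(grace):
            return task.Kill(ctx, syscall.SIGKILL)
        }
        return nil
    }

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        id := "81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b"
        if err := stopWithTimeout(ctx, client, id, 2*time.Second); err != nil {
            log.Fatal(err)
        }
    }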
Feb 13 15:30:13.323743 containerd[1484]: time="2025-02-13T15:30:13.323644067Z" level=info msg="shim disconnected" id=81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b namespace=k8s.io Feb 13 15:30:13.323743 containerd[1484]: time="2025-02-13T15:30:13.323719322Z" level=warning msg="cleaning up after shim disconnected" id=81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b namespace=k8s.io Feb 13 15:30:13.323743 containerd[1484]: time="2025-02-13T15:30:13.323729912Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:30:13.324017 containerd[1484]: time="2025-02-13T15:30:13.323856205Z" level=info msg="shim disconnected" id=c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267 namespace=k8s.io Feb 13 15:30:13.324017 containerd[1484]: time="2025-02-13T15:30:13.323882696Z" level=warning msg="cleaning up after shim disconnected" id=c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267 namespace=k8s.io Feb 13 15:30:13.324017 containerd[1484]: time="2025-02-13T15:30:13.323890721Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:30:13.387695 containerd[1484]: time="2025-02-13T15:30:13.387607317Z" level=info msg="StopContainer for \"c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267\" returns successfully" Feb 13 15:30:13.394038 containerd[1484]: time="2025-02-13T15:30:13.393987476Z" level=info msg="StopPodSandbox for \"d0e73684e85530877ecc2ce57766e8db4c985f2f7e4de7e1f4021140d8744f64\"" Feb 13 15:30:13.399196 containerd[1484]: time="2025-02-13T15:30:13.394043454Z" level=info msg="Container to stop \"c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:30:13.401437 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0e73684e85530877ecc2ce57766e8db4c985f2f7e4de7e1f4021140d8744f64-shm.mount: Deactivated successfully. Feb 13 15:30:13.407412 systemd[1]: cri-containerd-d0e73684e85530877ecc2ce57766e8db4c985f2f7e4de7e1f4021140d8744f64.scope: Deactivated successfully. 
Feb 13 15:30:13.418893 containerd[1484]: time="2025-02-13T15:30:13.418833396Z" level=info msg="StopContainer for \"81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b\" returns successfully" Feb 13 15:30:13.419401 containerd[1484]: time="2025-02-13T15:30:13.419350450Z" level=info msg="StopPodSandbox for \"ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376\"" Feb 13 15:30:13.419583 containerd[1484]: time="2025-02-13T15:30:13.419403231Z" level=info msg="Container to stop \"81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:30:13.419583 containerd[1484]: time="2025-02-13T15:30:13.419462044Z" level=info msg="Container to stop \"8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:30:13.419583 containerd[1484]: time="2025-02-13T15:30:13.419471722Z" level=info msg="Container to stop \"c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:30:13.419583 containerd[1484]: time="2025-02-13T15:30:13.419480309Z" level=info msg="Container to stop \"c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:30:13.419583 containerd[1484]: time="2025-02-13T15:30:13.419488494Z" level=info msg="Container to stop \"e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:30:13.426043 systemd[1]: cri-containerd-ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376.scope: Deactivated successfully. 
Feb 13 15:30:13.439504 containerd[1484]: time="2025-02-13T15:30:13.439424409Z" level=info msg="shim disconnected" id=d0e73684e85530877ecc2ce57766e8db4c985f2f7e4de7e1f4021140d8744f64 namespace=k8s.io Feb 13 15:30:13.439504 containerd[1484]: time="2025-02-13T15:30:13.439494924Z" level=warning msg="cleaning up after shim disconnected" id=d0e73684e85530877ecc2ce57766e8db4c985f2f7e4de7e1f4021140d8744f64 namespace=k8s.io Feb 13 15:30:13.439504 containerd[1484]: time="2025-02-13T15:30:13.439504472Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:30:13.454500 containerd[1484]: time="2025-02-13T15:30:13.454297282Z" level=info msg="shim disconnected" id=ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376 namespace=k8s.io Feb 13 15:30:13.454500 containerd[1484]: time="2025-02-13T15:30:13.454348730Z" level=warning msg="cleaning up after shim disconnected" id=ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376 namespace=k8s.io Feb 13 15:30:13.454500 containerd[1484]: time="2025-02-13T15:30:13.454358480Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:30:13.456011 containerd[1484]: time="2025-02-13T15:30:13.455964282Z" level=info msg="TearDown network for sandbox \"d0e73684e85530877ecc2ce57766e8db4c985f2f7e4de7e1f4021140d8744f64\" successfully" Feb 13 15:30:13.456011 containerd[1484]: time="2025-02-13T15:30:13.455996203Z" level=info msg="StopPodSandbox for \"d0e73684e85530877ecc2ce57766e8db4c985f2f7e4de7e1f4021140d8744f64\" returns successfully" Feb 13 15:30:13.467706 kubelet[2680]: I0213 15:30:13.467654 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/273cf29d-365f-426a-bc7b-18ab01aedc4a-cilium-config-path\") pod \"273cf29d-365f-426a-bc7b-18ab01aedc4a\" (UID: \"273cf29d-365f-426a-bc7b-18ab01aedc4a\") " Feb 13 15:30:13.467706 kubelet[2680]: I0213 15:30:13.467724 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5mzs\" (UniqueName: \"kubernetes.io/projected/273cf29d-365f-426a-bc7b-18ab01aedc4a-kube-api-access-w5mzs\") pod \"273cf29d-365f-426a-bc7b-18ab01aedc4a\" (UID: \"273cf29d-365f-426a-bc7b-18ab01aedc4a\") " Feb 13 15:30:13.471376 containerd[1484]: time="2025-02-13T15:30:13.471340893Z" level=info msg="TearDown network for sandbox \"ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376\" successfully" Feb 13 15:30:13.471515 containerd[1484]: time="2025-02-13T15:30:13.471493286Z" level=info msg="StopPodSandbox for \"ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376\" returns successfully" Feb 13 15:30:13.472611 kubelet[2680]: I0213 15:30:13.472569 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/273cf29d-365f-426a-bc7b-18ab01aedc4a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "273cf29d-365f-426a-bc7b-18ab01aedc4a" (UID: "273cf29d-365f-426a-bc7b-18ab01aedc4a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:30:13.472807 kubelet[2680]: I0213 15:30:13.472769 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/273cf29d-365f-426a-bc7b-18ab01aedc4a-kube-api-access-w5mzs" (OuterVolumeSpecName: "kube-api-access-w5mzs") pod "273cf29d-365f-426a-bc7b-18ab01aedc4a" (UID: "273cf29d-365f-426a-bc7b-18ab01aedc4a"). InnerVolumeSpecName "kube-api-access-w5mzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:30:13.568814 kubelet[2680]: I0213 15:30:13.568761 2680 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/273cf29d-365f-426a-bc7b-18ab01aedc4a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:30:13.568814 kubelet[2680]: I0213 15:30:13.568792 2680 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w5mzs\" (UniqueName: \"kubernetes.io/projected/273cf29d-365f-426a-bc7b-18ab01aedc4a-kube-api-access-w5mzs\") on node \"localhost\" DevicePath \"\"" Feb 13 15:30:13.669183 kubelet[2680]: I0213 15:30:13.669142 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-xtables-lock\") pod \"15bf415c-b75d-45be-9b93-843f98205a7f\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " Feb 13 15:30:13.669250 kubelet[2680]: I0213 15:30:13.669190 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-cilium-run\") pod \"15bf415c-b75d-45be-9b93-843f98205a7f\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " Feb 13 15:30:13.669250 kubelet[2680]: I0213 15:30:13.669225 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-hostproc\") pod \"15bf415c-b75d-45be-9b93-843f98205a7f\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " Feb 13 15:30:13.669307 kubelet[2680]: I0213 15:30:13.669257 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/15bf415c-b75d-45be-9b93-843f98205a7f-clustermesh-secrets\") pod \"15bf415c-b75d-45be-9b93-843f98205a7f\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " Feb 13 15:30:13.669307 kubelet[2680]: I0213 15:30:13.669271 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "15bf415c-b75d-45be-9b93-843f98205a7f" (UID: "15bf415c-b75d-45be-9b93-843f98205a7f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:30:13.669369 kubelet[2680]: I0213 15:30:13.669286 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgdcf\" (UniqueName: \"kubernetes.io/projected/15bf415c-b75d-45be-9b93-843f98205a7f-kube-api-access-cgdcf\") pod \"15bf415c-b75d-45be-9b93-843f98205a7f\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " Feb 13 15:30:13.669369 kubelet[2680]: I0213 15:30:13.669315 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "15bf415c-b75d-45be-9b93-843f98205a7f" (UID: "15bf415c-b75d-45be-9b93-843f98205a7f"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:30:13.669369 kubelet[2680]: I0213 15:30:13.669355 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-hostproc" (OuterVolumeSpecName: "hostproc") pod "15bf415c-b75d-45be-9b93-843f98205a7f" (UID: "15bf415c-b75d-45be-9b93-843f98205a7f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:30:13.669563 kubelet[2680]: I0213 15:30:13.669382 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-etc-cni-netd\") pod \"15bf415c-b75d-45be-9b93-843f98205a7f\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " Feb 13 15:30:13.669563 kubelet[2680]: I0213 15:30:13.669404 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-lib-modules\") pod \"15bf415c-b75d-45be-9b93-843f98205a7f\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " Feb 13 15:30:13.669563 kubelet[2680]: I0213 15:30:13.669428 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-cilium-cgroup\") pod \"15bf415c-b75d-45be-9b93-843f98205a7f\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " Feb 13 15:30:13.669563 kubelet[2680]: I0213 15:30:13.669467 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15bf415c-b75d-45be-9b93-843f98205a7f-cilium-config-path\") pod \"15bf415c-b75d-45be-9b93-843f98205a7f\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " Feb 13 15:30:13.669563 kubelet[2680]: I0213 15:30:13.669489 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-cni-path\") pod \"15bf415c-b75d-45be-9b93-843f98205a7f\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " Feb 13 15:30:13.669563 kubelet[2680]: I0213 15:30:13.669504 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-bpf-maps\") pod \"15bf415c-b75d-45be-9b93-843f98205a7f\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " Feb 13 15:30:13.669711 kubelet[2680]: I0213 15:30:13.669521 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15bf415c-b75d-45be-9b93-843f98205a7f-hubble-tls\") pod \"15bf415c-b75d-45be-9b93-843f98205a7f\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " Feb 13 15:30:13.669711 kubelet[2680]: I0213 15:30:13.669534 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-host-proc-sys-kernel\") pod \"15bf415c-b75d-45be-9b93-843f98205a7f\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " Feb 13 15:30:13.669711 kubelet[2680]: I0213 15:30:13.669551 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-host-proc-sys-net\") pod 
\"15bf415c-b75d-45be-9b93-843f98205a7f\" (UID: \"15bf415c-b75d-45be-9b93-843f98205a7f\") " Feb 13 15:30:13.669711 kubelet[2680]: I0213 15:30:13.669574 2680 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 15:30:13.669711 kubelet[2680]: I0213 15:30:13.669585 2680 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 15:30:13.669711 kubelet[2680]: I0213 15:30:13.669594 2680 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 15:30:13.669711 kubelet[2680]: I0213 15:30:13.669613 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "15bf415c-b75d-45be-9b93-843f98205a7f" (UID: "15bf415c-b75d-45be-9b93-843f98205a7f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:30:13.669872 kubelet[2680]: I0213 15:30:13.669631 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "15bf415c-b75d-45be-9b93-843f98205a7f" (UID: "15bf415c-b75d-45be-9b93-843f98205a7f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:30:13.669872 kubelet[2680]: I0213 15:30:13.669647 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "15bf415c-b75d-45be-9b93-843f98205a7f" (UID: "15bf415c-b75d-45be-9b93-843f98205a7f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:30:13.669872 kubelet[2680]: I0213 15:30:13.669661 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "15bf415c-b75d-45be-9b93-843f98205a7f" (UID: "15bf415c-b75d-45be-9b93-843f98205a7f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:30:13.669872 kubelet[2680]: I0213 15:30:13.669750 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "15bf415c-b75d-45be-9b93-843f98205a7f" (UID: "15bf415c-b75d-45be-9b93-843f98205a7f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:30:13.669872 kubelet[2680]: I0213 15:30:13.669775 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-cni-path" (OuterVolumeSpecName: "cni-path") pod "15bf415c-b75d-45be-9b93-843f98205a7f" (UID: "15bf415c-b75d-45be-9b93-843f98205a7f"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:30:13.673494 kubelet[2680]: I0213 15:30:13.673382 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "15bf415c-b75d-45be-9b93-843f98205a7f" (UID: "15bf415c-b75d-45be-9b93-843f98205a7f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:30:13.675428 kubelet[2680]: I0213 15:30:13.675384 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15bf415c-b75d-45be-9b93-843f98205a7f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "15bf415c-b75d-45be-9b93-843f98205a7f" (UID: "15bf415c-b75d-45be-9b93-843f98205a7f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:30:13.675772 kubelet[2680]: I0213 15:30:13.675730 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15bf415c-b75d-45be-9b93-843f98205a7f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "15bf415c-b75d-45be-9b93-843f98205a7f" (UID: "15bf415c-b75d-45be-9b93-843f98205a7f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:30:13.675948 kubelet[2680]: I0213 15:30:13.675906 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15bf415c-b75d-45be-9b93-843f98205a7f-kube-api-access-cgdcf" (OuterVolumeSpecName: "kube-api-access-cgdcf") pod "15bf415c-b75d-45be-9b93-843f98205a7f" (UID: "15bf415c-b75d-45be-9b93-843f98205a7f"). InnerVolumeSpecName "kube-api-access-cgdcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:30:13.676092 kubelet[2680]: I0213 15:30:13.676064 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15bf415c-b75d-45be-9b93-843f98205a7f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "15bf415c-b75d-45be-9b93-843f98205a7f" (UID: "15bf415c-b75d-45be-9b93-843f98205a7f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:30:13.770680 kubelet[2680]: I0213 15:30:13.770258 2680 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 15:30:13.770680 kubelet[2680]: I0213 15:30:13.770315 2680 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15bf415c-b75d-45be-9b93-843f98205a7f-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 15:30:13.770680 kubelet[2680]: I0213 15:30:13.770327 2680 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 15:30:13.770680 kubelet[2680]: I0213 15:30:13.770339 2680 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 15:30:13.770680 kubelet[2680]: I0213 15:30:13.770348 2680 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cgdcf\" (UniqueName: \"kubernetes.io/projected/15bf415c-b75d-45be-9b93-843f98205a7f-kube-api-access-cgdcf\") on node \"localhost\" DevicePath \"\"" Feb 13 15:30:13.770680 kubelet[2680]: I0213 15:30:13.770357 2680 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 15:30:13.770680 kubelet[2680]: I0213 15:30:13.770365 2680 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/15bf415c-b75d-45be-9b93-843f98205a7f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 15:30:13.770680 kubelet[2680]: I0213 15:30:13.770405 2680 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 15:30:13.771067 kubelet[2680]: I0213 15:30:13.770423 2680 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 15:30:13.771067 kubelet[2680]: I0213 15:30:13.770432 2680 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15bf415c-b75d-45be-9b93-843f98205a7f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:30:13.771067 kubelet[2680]: I0213 15:30:13.770471 2680 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15bf415c-b75d-45be-9b93-843f98205a7f-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:30:13.882074 systemd[1]: Removed slice kubepods-besteffort-pod273cf29d_365f_426a_bc7b_18ab01aedc4a.slice - libcontainer container kubepods-besteffort-pod273cf29d_365f_426a_bc7b_18ab01aedc4a.slice. Feb 13 15:30:13.884292 systemd[1]: Removed slice kubepods-burstable-pod15bf415c_b75d_45be_9b93_843f98205a7f.slice - libcontainer container kubepods-burstable-pod15bf415c_b75d_45be_9b93_843f98205a7f.slice. 
Feb 13 15:30:13.884435 systemd[1]: kubepods-burstable-pod15bf415c_b75d_45be_9b93_843f98205a7f.slice: Consumed 7.107s CPU time, 125.1M memory peak, 224K read from disk, 13.3M written to disk. Feb 13 15:30:14.117278 kubelet[2680]: I0213 15:30:14.117235 2680 scope.go:117] "RemoveContainer" containerID="c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267" Feb 13 15:30:14.125569 containerd[1484]: time="2025-02-13T15:30:14.125503024Z" level=info msg="RemoveContainer for \"c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267\"" Feb 13 15:30:14.166581 containerd[1484]: time="2025-02-13T15:30:14.166489343Z" level=info msg="RemoveContainer for \"c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267\" returns successfully" Feb 13 15:30:14.166986 kubelet[2680]: I0213 15:30:14.166937 2680 scope.go:117] "RemoveContainer" containerID="c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267" Feb 13 15:30:14.167606 containerd[1484]: time="2025-02-13T15:30:14.167484583Z" level=error msg="ContainerStatus for \"c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267\": not found" Feb 13 15:30:14.173824 kubelet[2680]: E0213 15:30:14.173786 2680 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267\": not found" containerID="c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267" Feb 13 15:30:14.173950 kubelet[2680]: I0213 15:30:14.173824 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267"} err="failed to get container status \"c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2d1c2d2c57a28a1e2731f0d59b8432c7556511d69dc0bc045cd2b97ef39f267\": not found" Feb 13 15:30:14.173950 kubelet[2680]: I0213 15:30:14.173909 2680 scope.go:117] "RemoveContainer" containerID="81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b" Feb 13 15:30:14.175181 containerd[1484]: time="2025-02-13T15:30:14.175144901Z" level=info msg="RemoveContainer for \"81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b\"" Feb 13 15:30:14.194623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376-rootfs.mount: Deactivated successfully. Feb 13 15:30:14.194767 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec65e1b0fa887b608238ac8350c5913476c0d528a85b7e97a8cd8ca0bea20376-shm.mount: Deactivated successfully. Feb 13 15:30:14.194855 systemd[1]: var-lib-kubelet-pods-15bf415c\x2db75d\x2d45be\x2d9b93\x2d843f98205a7f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcgdcf.mount: Deactivated successfully. Feb 13 15:30:14.194953 systemd[1]: var-lib-kubelet-pods-15bf415c\x2db75d\x2d45be\x2d9b93\x2d843f98205a7f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:30:14.195041 systemd[1]: var-lib-kubelet-pods-15bf415c\x2db75d\x2d45be\x2d9b93\x2d843f98205a7f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
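The mount unit names above use systemd's path escaping: "/" becomes "-" and bytes outside [A-Za-z0-9:_.] become \xNN, so var-lib-kubelet-pods-...-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw5mzs.mount maps back to /var/lib/kubelet/pods/.../volumes/kubernetes.io~projected/kube-api-access-w5mzs. A small decoder sketch in the spirit of "systemd-escape --unescape --path" (written here as an assumption, not lifted from systemd):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // unescapePath reverses systemd unit-name escaping for mount units:
    // "-" is a path separator, "\xNN" is a literal byte.
    func unescapePath(unit string) string {
        name := strings.TrimSuffix(unit, ".mount")
        var b strings.Builder
        b.WriteByte('/')
        for i := 0; i < len(name); i++ {
            switch {
            case name[i] == '-':
                b.WriteByte('/')
            case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
                if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
                    b.WriteByte(byte(v))
                    i += 3
                    continue
                }
                b.WriteByte(name[i])
            default:
                b.WriteByte(name[i])
            }
        }
        return b.String()
    }

    func main() {
        unit := `var-lib-kubelet-pods-273cf29d\x2d365f\x2d426a\x2dbc7b\x2d18ab01aedc4a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw5mzs.mount`
        fmt.Println(unescapePath(unit))
        // /var/lib/kubelet/pods/273cf29d-365f-426a-bc7b-18ab01aedc4a/volumes/kubernetes.io~projected/kube-api-access-w5mzs
    }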
Feb 13 15:30:14.195133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0e73684e85530877ecc2ce57766e8db4c985f2f7e4de7e1f4021140d8744f64-rootfs.mount: Deactivated successfully. Feb 13 15:30:14.195221 systemd[1]: var-lib-kubelet-pods-273cf29d\x2d365f\x2d426a\x2dbc7b\x2d18ab01aedc4a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw5mzs.mount: Deactivated successfully. Feb 13 15:30:14.221724 containerd[1484]: time="2025-02-13T15:30:14.221678984Z" level=info msg="RemoveContainer for \"81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b\" returns successfully" Feb 13 15:30:14.222108 kubelet[2680]: I0213 15:30:14.222003 2680 scope.go:117] "RemoveContainer" containerID="8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d" Feb 13 15:30:14.223101 containerd[1484]: time="2025-02-13T15:30:14.223064592Z" level=info msg="RemoveContainer for \"8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d\"" Feb 13 15:30:14.288528 containerd[1484]: time="2025-02-13T15:30:14.288490713Z" level=info msg="RemoveContainer for \"8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d\" returns successfully" Feb 13 15:30:14.288781 kubelet[2680]: I0213 15:30:14.288740 2680 scope.go:117] "RemoveContainer" containerID="e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17" Feb 13 15:30:14.289853 containerd[1484]: time="2025-02-13T15:30:14.289817850Z" level=info msg="RemoveContainer for \"e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17\"" Feb 13 15:30:14.358972 containerd[1484]: time="2025-02-13T15:30:14.358900757Z" level=info msg="RemoveContainer for \"e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17\" returns successfully" Feb 13 15:30:14.359240 kubelet[2680]: I0213 15:30:14.359206 2680 scope.go:117] "RemoveContainer" containerID="c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248" Feb 13 15:30:14.360721 containerd[1484]: time="2025-02-13T15:30:14.360375516Z" level=info msg="RemoveContainer for \"c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248\"" Feb 13 15:30:14.368991 containerd[1484]: time="2025-02-13T15:30:14.368867049Z" level=info msg="RemoveContainer for \"c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248\" returns successfully" Feb 13 15:30:14.369090 kubelet[2680]: I0213 15:30:14.369055 2680 scope.go:117] "RemoveContainer" containerID="c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572" Feb 13 15:30:14.370676 containerd[1484]: time="2025-02-13T15:30:14.370612137Z" level=info msg="RemoveContainer for \"c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572\"" Feb 13 15:30:14.377219 containerd[1484]: time="2025-02-13T15:30:14.377163136Z" level=info msg="RemoveContainer for \"c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572\" returns successfully" Feb 13 15:30:14.377510 kubelet[2680]: I0213 15:30:14.377470 2680 scope.go:117] "RemoveContainer" containerID="81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b" Feb 13 15:30:14.377790 containerd[1484]: time="2025-02-13T15:30:14.377731767Z" level=error msg="ContainerStatus for \"81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b\": not found" Feb 13 15:30:14.377953 kubelet[2680]: E0213 15:30:14.377920 2680 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b\": not found" containerID="81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b" Feb 13 15:30:14.378016 kubelet[2680]: I0213 15:30:14.377959 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b"} err="failed to get container status \"81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b\": rpc error: code = NotFound desc = an error occurred when try to find container \"81de1d25a4681a9169c6dc873b878eec430e86f9a72313d3762ec101859c1c0b\": not found" Feb 13 15:30:14.378016 kubelet[2680]: I0213 15:30:14.377991 2680 scope.go:117] "RemoveContainer" containerID="8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d" Feb 13 15:30:14.378262 containerd[1484]: time="2025-02-13T15:30:14.378190758Z" level=error msg="ContainerStatus for \"8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d\": not found" Feb 13 15:30:14.378363 kubelet[2680]: E0213 15:30:14.378305 2680 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d\": not found" containerID="8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d" Feb 13 15:30:14.378363 kubelet[2680]: I0213 15:30:14.378326 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d"} err="failed to get container status \"8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f6fd2e30b6d441c70a1c3fda2722194462961a25e549686cde2a0532aa3652d\": not found" Feb 13 15:30:14.378363 kubelet[2680]: I0213 15:30:14.378344 2680 scope.go:117] "RemoveContainer" containerID="e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17" Feb 13 15:30:14.378532 containerd[1484]: time="2025-02-13T15:30:14.378508377Z" level=error msg="ContainerStatus for \"e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17\": not found" Feb 13 15:30:14.378650 kubelet[2680]: E0213 15:30:14.378619 2680 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17\": not found" containerID="e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17" Feb 13 15:30:14.378696 kubelet[2680]: I0213 15:30:14.378648 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17"} err="failed to get container status \"e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17\": rpc error: code = NotFound desc = an error occurred when try to find container \"e446d0f8de57de36f94193bf287de01f382b18fbc1d6200b1aa4ee2181176d17\": not 
found" Feb 13 15:30:14.378696 kubelet[2680]: I0213 15:30:14.378670 2680 scope.go:117] "RemoveContainer" containerID="c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248" Feb 13 15:30:14.378964 containerd[1484]: time="2025-02-13T15:30:14.378910328Z" level=error msg="ContainerStatus for \"c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248\": not found" Feb 13 15:30:14.379152 kubelet[2680]: E0213 15:30:14.379120 2680 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248\": not found" containerID="c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248" Feb 13 15:30:14.379212 kubelet[2680]: I0213 15:30:14.379156 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248"} err="failed to get container status \"c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248\": rpc error: code = NotFound desc = an error occurred when try to find container \"c8345f399397486b319de9ed4356a2b29f1d8d1a9ee326fd2cba1bf7bb912248\": not found" Feb 13 15:30:14.379212 kubelet[2680]: I0213 15:30:14.379186 2680 scope.go:117] "RemoveContainer" containerID="c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572" Feb 13 15:30:14.379505 containerd[1484]: time="2025-02-13T15:30:14.379457648Z" level=error msg="ContainerStatus for \"c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572\": not found" Feb 13 15:30:14.379659 kubelet[2680]: E0213 15:30:14.379630 2680 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572\": not found" containerID="c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572" Feb 13 15:30:14.379721 kubelet[2680]: I0213 15:30:14.379663 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572"} err="failed to get container status \"c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572\": rpc error: code = NotFound desc = an error occurred when try to find container \"c692ed76cd08bee96243defcad493097db407c8da56704ee4f19470d40554572\": not found" Feb 13 15:30:14.873272 kubelet[2680]: E0213 15:30:14.873217 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:30:15.137658 sshd[4333]: Connection closed by 10.0.0.1 port 48390 Feb 13 15:30:15.138363 sshd-session[4330]: pam_unix(sshd:session): session closed for user core Feb 13 15:30:15.156054 systemd[1]: sshd@23-10.0.0.34:22-10.0.0.1:48390.service: Deactivated successfully. Feb 13 15:30:15.158281 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:30:15.159956 systemd-logind[1469]: Session 24 logged out. Waiting for processes to exit. 
Feb 13 15:30:15.166833 systemd[1]: Started sshd@24-10.0.0.34:22-10.0.0.1:48398.service - OpenSSH per-connection server daemon (10.0.0.1:48398). Feb 13 15:30:15.167708 systemd-logind[1469]: Removed session 24. Feb 13 15:30:15.206726 sshd[4494]: Accepted publickey for core from 10.0.0.1 port 48398 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:30:15.208200 sshd-session[4494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:30:15.213048 systemd-logind[1469]: New session 25 of user core. Feb 13 15:30:15.224674 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:30:15.747292 sshd[4497]: Connection closed by 10.0.0.1 port 48398 Feb 13 15:30:15.748125 sshd-session[4494]: pam_unix(sshd:session): session closed for user core Feb 13 15:30:15.762539 systemd[1]: sshd@24-10.0.0.34:22-10.0.0.1:48398.service: Deactivated successfully. Feb 13 15:30:15.763707 kubelet[2680]: I0213 15:30:15.763650 2680 topology_manager.go:215] "Topology Admit Handler" podUID="a23347fd-eea3-4779-8e3e-aae7bf4e936a" podNamespace="kube-system" podName="cilium-9dc7f" Feb 13 15:30:15.763782 kubelet[2680]: E0213 15:30:15.763739 2680 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15bf415c-b75d-45be-9b93-843f98205a7f" containerName="apply-sysctl-overwrites" Feb 13 15:30:15.763782 kubelet[2680]: E0213 15:30:15.763755 2680 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15bf415c-b75d-45be-9b93-843f98205a7f" containerName="clean-cilium-state" Feb 13 15:30:15.763782 kubelet[2680]: E0213 15:30:15.763764 2680 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="273cf29d-365f-426a-bc7b-18ab01aedc4a" containerName="cilium-operator" Feb 13 15:30:15.763782 kubelet[2680]: E0213 15:30:15.763772 2680 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15bf415c-b75d-45be-9b93-843f98205a7f" containerName="mount-cgroup" Feb 13 15:30:15.763782 kubelet[2680]: E0213 15:30:15.763780 2680 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15bf415c-b75d-45be-9b93-843f98205a7f" containerName="mount-bpf-fs" Feb 13 15:30:15.763926 kubelet[2680]: E0213 15:30:15.763789 2680 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15bf415c-b75d-45be-9b93-843f98205a7f" containerName="cilium-agent" Feb 13 15:30:15.763926 kubelet[2680]: I0213 15:30:15.763835 2680 memory_manager.go:354] "RemoveStaleState removing state" podUID="273cf29d-365f-426a-bc7b-18ab01aedc4a" containerName="cilium-operator" Feb 13 15:30:15.763926 kubelet[2680]: I0213 15:30:15.763844 2680 memory_manager.go:354] "RemoveStaleState removing state" podUID="15bf415c-b75d-45be-9b93-843f98205a7f" containerName="cilium-agent" Feb 13 15:30:15.768902 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:30:15.772329 systemd-logind[1469]: Session 25 logged out. Waiting for processes to exit. Feb 13 15:30:15.782890 systemd[1]: Started sshd@25-10.0.0.34:22-10.0.0.1:48410.service - OpenSSH per-connection server daemon (10.0.0.1:48410). Feb 13 15:30:15.784373 systemd-logind[1469]: Removed session 25. Feb 13 15:30:15.794757 systemd[1]: Created slice kubepods-burstable-poda23347fd_eea3_4779_8e3e_aae7bf4e936a.slice - libcontainer container kubepods-burstable-poda23347fd_eea3_4779_8e3e_aae7bf4e936a.slice. 
Feb 13 15:30:15.825504 sshd[4508]: Accepted publickey for core from 10.0.0.1 port 48410 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg Feb 13 15:30:15.827272 sshd-session[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:30:15.832571 systemd-logind[1469]: New session 26 of user core. Feb 13 15:30:15.839594 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 15:30:15.875580 kubelet[2680]: I0213 15:30:15.875543 2680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15bf415c-b75d-45be-9b93-843f98205a7f" path="/var/lib/kubelet/pods/15bf415c-b75d-45be-9b93-843f98205a7f/volumes" Feb 13 15:30:15.876424 kubelet[2680]: I0213 15:30:15.876396 2680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="273cf29d-365f-426a-bc7b-18ab01aedc4a" path="/var/lib/kubelet/pods/273cf29d-365f-426a-bc7b-18ab01aedc4a/volumes" Feb 13 15:30:15.882456 kubelet[2680]: I0213 15:30:15.882410 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a23347fd-eea3-4779-8e3e-aae7bf4e936a-cilium-run\") pod \"cilium-9dc7f\" (UID: \"a23347fd-eea3-4779-8e3e-aae7bf4e936a\") " pod="kube-system/cilium-9dc7f" Feb 13 15:30:15.882583 kubelet[2680]: I0213 15:30:15.882494 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a23347fd-eea3-4779-8e3e-aae7bf4e936a-lib-modules\") pod \"cilium-9dc7f\" (UID: \"a23347fd-eea3-4779-8e3e-aae7bf4e936a\") " pod="kube-system/cilium-9dc7f" Feb 13 15:30:15.882583 kubelet[2680]: I0213 15:30:15.882522 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a23347fd-eea3-4779-8e3e-aae7bf4e936a-cilium-config-path\") pod \"cilium-9dc7f\" (UID: \"a23347fd-eea3-4779-8e3e-aae7bf4e936a\") " pod="kube-system/cilium-9dc7f" Feb 13 15:30:15.882583 kubelet[2680]: I0213 15:30:15.882537 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a23347fd-eea3-4779-8e3e-aae7bf4e936a-hubble-tls\") pod \"cilium-9dc7f\" (UID: \"a23347fd-eea3-4779-8e3e-aae7bf4e936a\") " pod="kube-system/cilium-9dc7f" Feb 13 15:30:15.882583 kubelet[2680]: I0213 15:30:15.882558 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a23347fd-eea3-4779-8e3e-aae7bf4e936a-cni-path\") pod \"cilium-9dc7f\" (UID: \"a23347fd-eea3-4779-8e3e-aae7bf4e936a\") " pod="kube-system/cilium-9dc7f" Feb 13 15:30:15.882583 kubelet[2680]: I0213 15:30:15.882575 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a23347fd-eea3-4779-8e3e-aae7bf4e936a-xtables-lock\") pod \"cilium-9dc7f\" (UID: \"a23347fd-eea3-4779-8e3e-aae7bf4e936a\") " pod="kube-system/cilium-9dc7f" Feb 13 15:30:15.882696 kubelet[2680]: I0213 15:30:15.882591 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a23347fd-eea3-4779-8e3e-aae7bf4e936a-bpf-maps\") pod \"cilium-9dc7f\" (UID: \"a23347fd-eea3-4779-8e3e-aae7bf4e936a\") " pod="kube-system/cilium-9dc7f" Feb 13 15:30:15.882696 kubelet[2680]: I0213 
15:30:15.882609 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a23347fd-eea3-4779-8e3e-aae7bf4e936a-etc-cni-netd\") pod \"cilium-9dc7f\" (UID: \"a23347fd-eea3-4779-8e3e-aae7bf4e936a\") " pod="kube-system/cilium-9dc7f" Feb 13 15:30:15.882696 kubelet[2680]: I0213 15:30:15.882623 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a23347fd-eea3-4779-8e3e-aae7bf4e936a-clustermesh-secrets\") pod \"cilium-9dc7f\" (UID: \"a23347fd-eea3-4779-8e3e-aae7bf4e936a\") " pod="kube-system/cilium-9dc7f" Feb 13 15:30:15.882696 kubelet[2680]: I0213 15:30:15.882638 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a23347fd-eea3-4779-8e3e-aae7bf4e936a-cilium-ipsec-secrets\") pod \"cilium-9dc7f\" (UID: \"a23347fd-eea3-4779-8e3e-aae7bf4e936a\") " pod="kube-system/cilium-9dc7f" Feb 13 15:30:15.882696 kubelet[2680]: I0213 15:30:15.882654 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a23347fd-eea3-4779-8e3e-aae7bf4e936a-host-proc-sys-kernel\") pod \"cilium-9dc7f\" (UID: \"a23347fd-eea3-4779-8e3e-aae7bf4e936a\") " pod="kube-system/cilium-9dc7f" Feb 13 15:30:15.882806 kubelet[2680]: I0213 15:30:15.882669 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a23347fd-eea3-4779-8e3e-aae7bf4e936a-host-proc-sys-net\") pod \"cilium-9dc7f\" (UID: \"a23347fd-eea3-4779-8e3e-aae7bf4e936a\") " pod="kube-system/cilium-9dc7f" Feb 13 15:30:15.882806 kubelet[2680]: I0213 15:30:15.882688 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a23347fd-eea3-4779-8e3e-aae7bf4e936a-cilium-cgroup\") pod \"cilium-9dc7f\" (UID: \"a23347fd-eea3-4779-8e3e-aae7bf4e936a\") " pod="kube-system/cilium-9dc7f" Feb 13 15:30:15.882806 kubelet[2680]: I0213 15:30:15.882705 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a23347fd-eea3-4779-8e3e-aae7bf4e936a-hostproc\") pod \"cilium-9dc7f\" (UID: \"a23347fd-eea3-4779-8e3e-aae7bf4e936a\") " pod="kube-system/cilium-9dc7f" Feb 13 15:30:15.882806 kubelet[2680]: I0213 15:30:15.882768 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2b4k\" (UniqueName: \"kubernetes.io/projected/a23347fd-eea3-4779-8e3e-aae7bf4e936a-kube-api-access-f2b4k\") pod \"cilium-9dc7f\" (UID: \"a23347fd-eea3-4779-8e3e-aae7bf4e936a\") " pod="kube-system/cilium-9dc7f" Feb 13 15:30:15.890360 sshd[4511]: Connection closed by 10.0.0.1 port 48410 Feb 13 15:30:15.890786 sshd-session[4508]: pam_unix(sshd:session): session closed for user core Feb 13 15:30:15.906520 systemd[1]: sshd@25-10.0.0.34:22-10.0.0.1:48410.service: Deactivated successfully. Feb 13 15:30:15.908928 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 15:30:15.910812 systemd-logind[1469]: Session 26 logged out. Waiting for processes to exit. 
Feb 13 15:30:15.919830 systemd[1]: Started sshd@26-10.0.0.34:22-10.0.0.1:48426.service - OpenSSH per-connection server daemon (10.0.0.1:48426).
Feb 13 15:30:15.921530 systemd-logind[1469]: Removed session 26.
Feb 13 15:30:15.926357 kubelet[2680]: E0213 15:30:15.926323 2680 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:30:15.955703 sshd[4517]: Accepted publickey for core from 10.0.0.1 port 48426 ssh2: RSA SHA256:q0iwLncKxqtVn4+A43RQd5OyuztjvPIVAIQ8iEfp9Cg
Feb 13 15:30:15.957341 sshd-session[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:30:15.962033 systemd-logind[1469]: New session 27 of user core.
Feb 13 15:30:15.975581 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 15:30:16.100745 kubelet[2680]: E0213 15:30:16.100596 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:30:16.101397 containerd[1484]: time="2025-02-13T15:30:16.101172508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9dc7f,Uid:a23347fd-eea3-4779-8e3e-aae7bf4e936a,Namespace:kube-system,Attempt:0,}"
Feb 13 15:30:16.125207 containerd[1484]: time="2025-02-13T15:30:16.124989891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:30:16.125207 containerd[1484]: time="2025-02-13T15:30:16.125131162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:30:16.125207 containerd[1484]: time="2025-02-13T15:30:16.125159616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:30:16.125642 containerd[1484]: time="2025-02-13T15:30:16.125256272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:30:16.150654 systemd[1]: Started cri-containerd-5f97cd083dcb22cbbcce6a038eb530a5020a57be1cecb909439207f7d8390d25.scope - libcontainer container 5f97cd083dcb22cbbcce6a038eb530a5020a57be1cecb909439207f7d8390d25.
Feb 13 15:30:16.173804 containerd[1484]: time="2025-02-13T15:30:16.173757070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9dc7f,Uid:a23347fd-eea3-4779-8e3e-aae7bf4e936a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f97cd083dcb22cbbcce6a038eb530a5020a57be1cecb909439207f7d8390d25\""
Feb 13 15:30:16.174830 kubelet[2680]: E0213 15:30:16.174555 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:30:16.177836 containerd[1484]: time="2025-02-13T15:30:16.177809055Z" level=info msg="CreateContainer within sandbox \"5f97cd083dcb22cbbcce6a038eb530a5020a57be1cecb909439207f7d8390d25\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:30:16.193695 containerd[1484]: time="2025-02-13T15:30:16.193619595Z" level=info msg="CreateContainer within sandbox \"5f97cd083dcb22cbbcce6a038eb530a5020a57be1cecb909439207f7d8390d25\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"54b021e26b0a5ffa1555861d78c53d509b9dded35431fd79d760382a106e9a15\""
Feb 13 15:30:16.194271 containerd[1484]: time="2025-02-13T15:30:16.194229414Z" level=info msg="StartContainer for \"54b021e26b0a5ffa1555861d78c53d509b9dded35431fd79d760382a106e9a15\""
Feb 13 15:30:16.223604 systemd[1]: Started cri-containerd-54b021e26b0a5ffa1555861d78c53d509b9dded35431fd79d760382a106e9a15.scope - libcontainer container 54b021e26b0a5ffa1555861d78c53d509b9dded35431fd79d760382a106e9a15.
Feb 13 15:30:16.250761 containerd[1484]: time="2025-02-13T15:30:16.250670729Z" level=info msg="StartContainer for \"54b021e26b0a5ffa1555861d78c53d509b9dded35431fd79d760382a106e9a15\" returns successfully"
Feb 13 15:30:16.261593 systemd[1]: cri-containerd-54b021e26b0a5ffa1555861d78c53d509b9dded35431fd79d760382a106e9a15.scope: Deactivated successfully.
Feb 13 15:30:16.296809 containerd[1484]: time="2025-02-13T15:30:16.296734998Z" level=info msg="shim disconnected" id=54b021e26b0a5ffa1555861d78c53d509b9dded35431fd79d760382a106e9a15 namespace=k8s.io
Feb 13 15:30:16.297102 containerd[1484]: time="2025-02-13T15:30:16.297058097Z" level=warning msg="cleaning up after shim disconnected" id=54b021e26b0a5ffa1555861d78c53d509b9dded35431fd79d760382a106e9a15 namespace=k8s.io
Feb 13 15:30:16.297102 containerd[1484]: time="2025-02-13T15:30:16.297075771Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:30:17.129665 kubelet[2680]: E0213 15:30:17.129633 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:30:17.131324 containerd[1484]: time="2025-02-13T15:30:17.131291162Z" level=info msg="CreateContainer within sandbox \"5f97cd083dcb22cbbcce6a038eb530a5020a57be1cecb909439207f7d8390d25\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:30:17.147799 containerd[1484]: time="2025-02-13T15:30:17.147718235Z" level=info msg="CreateContainer within sandbox \"5f97cd083dcb22cbbcce6a038eb530a5020a57be1cecb909439207f7d8390d25\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a72b7e0e248d7ce76a639930c38917f07b7f4a61c7c175cecb413df63f6e3531\""
Feb 13 15:30:17.148552 containerd[1484]: time="2025-02-13T15:30:17.148495014Z" level=info msg="StartContainer for \"a72b7e0e248d7ce76a639930c38917f07b7f4a61c7c175cecb413df63f6e3531\""
Feb 13 15:30:17.182591 systemd[1]: Started cri-containerd-a72b7e0e248d7ce76a639930c38917f07b7f4a61c7c175cecb413df63f6e3531.scope - libcontainer container a72b7e0e248d7ce76a639930c38917f07b7f4a61c7c175cecb413df63f6e3531.
Feb 13 15:30:17.210204 containerd[1484]: time="2025-02-13T15:30:17.210145025Z" level=info msg="StartContainer for \"a72b7e0e248d7ce76a639930c38917f07b7f4a61c7c175cecb413df63f6e3531\" returns successfully"
Feb 13 15:30:17.215774 systemd[1]: cri-containerd-a72b7e0e248d7ce76a639930c38917f07b7f4a61c7c175cecb413df63f6e3531.scope: Deactivated successfully.
Feb 13 15:30:17.239036 containerd[1484]: time="2025-02-13T15:30:17.238966478Z" level=info msg="shim disconnected" id=a72b7e0e248d7ce76a639930c38917f07b7f4a61c7c175cecb413df63f6e3531 namespace=k8s.io
Feb 13 15:30:17.239036 containerd[1484]: time="2025-02-13T15:30:17.239028016Z" level=warning msg="cleaning up after shim disconnected" id=a72b7e0e248d7ce76a639930c38917f07b7f4a61c7c175cecb413df63f6e3531 namespace=k8s.io
Feb 13 15:30:17.239036 containerd[1484]: time="2025-02-13T15:30:17.239037935Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:30:17.870412 kubelet[2680]: I0213 15:30:17.870340 2680 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:30:17Z","lastTransitionTime":"2025-02-13T15:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:30:17.989130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a72b7e0e248d7ce76a639930c38917f07b7f4a61c7c175cecb413df63f6e3531-rootfs.mount: Deactivated successfully.
Feb 13 15:30:18.135960 kubelet[2680]: E0213 15:30:18.135281 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:30:18.137881 containerd[1484]: time="2025-02-13T15:30:18.137635391Z" level=info msg="CreateContainer within sandbox \"5f97cd083dcb22cbbcce6a038eb530a5020a57be1cecb909439207f7d8390d25\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:30:18.157699 containerd[1484]: time="2025-02-13T15:30:18.157647721Z" level=info msg="CreateContainer within sandbox \"5f97cd083dcb22cbbcce6a038eb530a5020a57be1cecb909439207f7d8390d25\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4d194b95bc21cee9290c044861fff0448de896377c773ba6009bceed58932d03\""
Feb 13 15:30:18.158174 containerd[1484]: time="2025-02-13T15:30:18.158143540Z" level=info msg="StartContainer for \"4d194b95bc21cee9290c044861fff0448de896377c773ba6009bceed58932d03\""
Feb 13 15:30:18.217595 systemd[1]: Started cri-containerd-4d194b95bc21cee9290c044861fff0448de896377c773ba6009bceed58932d03.scope - libcontainer container 4d194b95bc21cee9290c044861fff0448de896377c773ba6009bceed58932d03.
Feb 13 15:30:18.261331 systemd[1]: cri-containerd-4d194b95bc21cee9290c044861fff0448de896377c773ba6009bceed58932d03.scope: Deactivated successfully.
Feb 13 15:30:18.309797 containerd[1484]: time="2025-02-13T15:30:18.309735624Z" level=info msg="StartContainer for \"4d194b95bc21cee9290c044861fff0448de896377c773ba6009bceed58932d03\" returns successfully"
Feb 13 15:30:18.353140 containerd[1484]: time="2025-02-13T15:30:18.353046425Z" level=info msg="shim disconnected" id=4d194b95bc21cee9290c044861fff0448de896377c773ba6009bceed58932d03 namespace=k8s.io
Feb 13 15:30:18.353140 containerd[1484]: time="2025-02-13T15:30:18.353110146Z" level=warning msg="cleaning up after shim disconnected" id=4d194b95bc21cee9290c044861fff0448de896377c773ba6009bceed58932d03 namespace=k8s.io
Feb 13 15:30:18.353140 containerd[1484]: time="2025-02-13T15:30:18.353120767Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:30:18.989203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d194b95bc21cee9290c044861fff0448de896377c773ba6009bceed58932d03-rootfs.mount: Deactivated successfully.
Feb 13 15:30:19.137604 kubelet[2680]: E0213 15:30:19.137569 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:30:19.139631 containerd[1484]: time="2025-02-13T15:30:19.139587068Z" level=info msg="CreateContainer within sandbox \"5f97cd083dcb22cbbcce6a038eb530a5020a57be1cecb909439207f7d8390d25\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:30:19.177510 containerd[1484]: time="2025-02-13T15:30:19.177427929Z" level=info msg="CreateContainer within sandbox \"5f97cd083dcb22cbbcce6a038eb530a5020a57be1cecb909439207f7d8390d25\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"89f1994c1ad852b1e558cc1d2a33265f3d80dec8760d5ff5f9a1144207cbc0a5\""
Feb 13 15:30:19.178163 containerd[1484]: time="2025-02-13T15:30:19.178118962Z" level=info msg="StartContainer for \"89f1994c1ad852b1e558cc1d2a33265f3d80dec8760d5ff5f9a1144207cbc0a5\""
Feb 13 15:30:19.206687 systemd[1]: Started cri-containerd-89f1994c1ad852b1e558cc1d2a33265f3d80dec8760d5ff5f9a1144207cbc0a5.scope - libcontainer container 89f1994c1ad852b1e558cc1d2a33265f3d80dec8760d5ff5f9a1144207cbc0a5.
Feb 13 15:30:19.234318 systemd[1]: cri-containerd-89f1994c1ad852b1e558cc1d2a33265f3d80dec8760d5ff5f9a1144207cbc0a5.scope: Deactivated successfully.
Feb 13 15:30:19.236384 containerd[1484]: time="2025-02-13T15:30:19.236346057Z" level=info msg="StartContainer for \"89f1994c1ad852b1e558cc1d2a33265f3d80dec8760d5ff5f9a1144207cbc0a5\" returns successfully"
Feb 13 15:30:19.261494 containerd[1484]: time="2025-02-13T15:30:19.261276978Z" level=info msg="shim disconnected" id=89f1994c1ad852b1e558cc1d2a33265f3d80dec8760d5ff5f9a1144207cbc0a5 namespace=k8s.io
Feb 13 15:30:19.261494 containerd[1484]: time="2025-02-13T15:30:19.261338335Z" level=warning msg="cleaning up after shim disconnected" id=89f1994c1ad852b1e558cc1d2a33265f3d80dec8760d5ff5f9a1144207cbc0a5 namespace=k8s.io
Feb 13 15:30:19.261494 containerd[1484]: time="2025-02-13T15:30:19.261349397Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:30:19.989257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89f1994c1ad852b1e558cc1d2a33265f3d80dec8760d5ff5f9a1144207cbc0a5-rootfs.mount: Deactivated successfully.
Feb 13 15:30:20.141734 kubelet[2680]: E0213 15:30:20.141697 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:30:20.164401 containerd[1484]: time="2025-02-13T15:30:20.164348440Z" level=info msg="CreateContainer within sandbox \"5f97cd083dcb22cbbcce6a038eb530a5020a57be1cecb909439207f7d8390d25\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:30:20.188037 containerd[1484]: time="2025-02-13T15:30:20.187957008Z" level=info msg="CreateContainer within sandbox \"5f97cd083dcb22cbbcce6a038eb530a5020a57be1cecb909439207f7d8390d25\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"13d7da886c4a487b5870875f173d984c1d9eff8de470c2fb249dae03ca7c605f\""
Feb 13 15:30:20.188625 containerd[1484]: time="2025-02-13T15:30:20.188575069Z" level=info msg="StartContainer for \"13d7da886c4a487b5870875f173d984c1d9eff8de470c2fb249dae03ca7c605f\""
Feb 13 15:30:20.215609 systemd[1]: Started cri-containerd-13d7da886c4a487b5870875f173d984c1d9eff8de470c2fb249dae03ca7c605f.scope - libcontainer container 13d7da886c4a487b5870875f173d984c1d9eff8de470c2fb249dae03ca7c605f.
Feb 13 15:30:20.250065 containerd[1484]: time="2025-02-13T15:30:20.249918018Z" level=info msg="StartContainer for \"13d7da886c4a487b5870875f173d984c1d9eff8de470c2fb249dae03ca7c605f\" returns successfully"
Feb 13 15:30:20.684475 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 15:30:21.146918 kubelet[2680]: E0213 15:30:21.146882 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:30:21.159289 kubelet[2680]: I0213 15:30:21.159209 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9dc7f" podStartSLOduration=6.159192592 podStartE2EDuration="6.159192592s" podCreationTimestamp="2025-02-13 15:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:30:21.158833105 +0000 UTC m=+85.365777914" watchObservedRunningTime="2025-02-13 15:30:21.159192592 +0000 UTC m=+85.366137381"
Feb 13 15:30:21.873570 kubelet[2680]: E0213 15:30:21.873493 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:30:21.873570 kubelet[2680]: E0213 15:30:21.873565 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:30:22.156865 kubelet[2680]: E0213 15:30:22.156737 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:30:23.888177 systemd-networkd[1415]: lxc_health: Link UP
Feb 13 15:30:23.889782 systemd-networkd[1415]: lxc_health: Gained carrier
Feb 13 15:30:24.103485 kubelet[2680]: E0213 15:30:24.103323 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:30:24.161424 kubelet[2680]: E0213 15:30:24.161092 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:30:25.163387 kubelet[2680]: E0213 15:30:25.163333 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:30:25.910822 systemd-networkd[1415]: lxc_health: Gained IPv6LL
Feb 13 15:30:27.874225 kubelet[2680]: E0213 15:30:27.874133 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:30:30.778969 sshd[4520]: Connection closed by 10.0.0.1 port 48426
Feb 13 15:30:30.779980 sshd-session[4517]: pam_unix(sshd:session): session closed for user core
Feb 13 15:30:30.785382 systemd[1]: sshd@26-10.0.0.34:22-10.0.0.1:48426.service: Deactivated successfully.
Feb 13 15:30:30.787986 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 15:30:30.788906 systemd-logind[1469]: Session 27 logged out. Waiting for processes to exit.
Feb 13 15:30:30.790076 systemd-logind[1469]: Removed session 27.