Jan 29 11:39:17.033437 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025 Jan 29 11:39:17.033461 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:39:17.033476 kernel: BIOS-provided physical RAM map: Jan 29 11:39:17.033485 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 11:39:17.033493 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 11:39:17.033501 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 11:39:17.033512 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 29 11:39:17.033574 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 29 11:39:17.033581 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 29 11:39:17.033591 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 29 11:39:17.033597 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 29 11:39:17.033603 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 11:39:17.033609 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 29 11:39:17.033616 kernel: NX (Execute Disable) protection: active Jan 29 11:39:17.033623 kernel: APIC: Static calls initialized Jan 29 11:39:17.033633 kernel: SMBIOS 2.8 present. 
Jan 29 11:39:17.033640 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 29 11:39:17.033647 kernel: Hypervisor detected: KVM Jan 29 11:39:17.033653 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 11:39:17.033660 kernel: kvm-clock: using sched offset of 2306658538 cycles Jan 29 11:39:17.033667 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 11:39:17.033674 kernel: tsc: Detected 2794.748 MHz processor Jan 29 11:39:17.033681 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 11:39:17.033688 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 11:39:17.033695 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 29 11:39:17.033705 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 11:39:17.033712 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 11:39:17.033719 kernel: Using GB pages for direct mapping Jan 29 11:39:17.033725 kernel: ACPI: Early table checksum verification disabled Jan 29 11:39:17.033733 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 29 11:39:17.033742 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:39:17.033752 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:39:17.033761 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:39:17.033774 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 29 11:39:17.033784 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:39:17.033791 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:39:17.033797 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:39:17.033804 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:39:17.033811 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Jan 29 11:39:17.033818 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Jan 29 11:39:17.033829 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 29 11:39:17.033838 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Jan 29 11:39:17.033846 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Jan 29 11:39:17.033853 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Jan 29 11:39:17.033860 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Jan 29 11:39:17.033867 kernel: No NUMA configuration found Jan 29 11:39:17.033874 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 29 11:39:17.033881 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 29 11:39:17.033891 kernel: Zone ranges: Jan 29 11:39:17.033898 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 11:39:17.033905 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 29 11:39:17.033912 kernel: Normal empty Jan 29 11:39:17.033920 kernel: Movable zone start for each node Jan 29 11:39:17.033927 kernel: Early memory node ranges Jan 29 11:39:17.033934 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 11:39:17.033943 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 29 11:39:17.033953 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 29 11:39:17.033967 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:39:17.033977 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 11:39:17.033987 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 29 11:39:17.033996 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 11:39:17.034003 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 11:39:17.034010 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 11:39:17.034017 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 11:39:17.034024 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 11:39:17.034031 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 11:39:17.034041 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 11:39:17.034048 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 11:39:17.034056 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 11:39:17.034063 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 11:39:17.034070 kernel: TSC deadline timer available Jan 29 11:39:17.034077 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 29 11:39:17.034084 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 11:39:17.034091 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 29 11:39:17.034099 kernel: kvm-guest: setup PV sched yield Jan 29 11:39:17.034108 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 29 11:39:17.034115 kernel: Booting paravirtualized kernel on KVM Jan 29 11:39:17.034123 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 11:39:17.034130 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 29 11:39:17.034138 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 29 11:39:17.034147 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 29 11:39:17.034166 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 29 11:39:17.034176 kernel: kvm-guest: PV spinlocks enabled Jan 29 11:39:17.034186 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 11:39:17.034197 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:39:17.034208 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:39:17.034215 kernel: random: crng init done Jan 29 11:39:17.034222 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 11:39:17.034230 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:39:17.034237 kernel: Fallback order for Node 0: 0 Jan 29 11:39:17.034244 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 29 11:39:17.034251 kernel: Policy zone: DMA32 Jan 29 11:39:17.034258 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:39:17.034268 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 136900K reserved, 0K cma-reserved) Jan 29 11:39:17.034275 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 29 11:39:17.034282 kernel: ftrace: allocating 37923 entries in 149 pages Jan 29 11:39:17.034289 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 11:39:17.034297 kernel: Dynamic Preempt: voluntary Jan 29 11:39:17.034304 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:39:17.034311 kernel: rcu: RCU event tracing is enabled. Jan 29 11:39:17.034319 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 29 11:39:17.034326 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:39:17.034336 kernel: Rude variant of Tasks RCU enabled. Jan 29 11:39:17.034343 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:39:17.034351 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 11:39:17.034361 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 29 11:39:17.034371 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 29 11:39:17.034381 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 11:39:17.034391 kernel: Console: colour VGA+ 80x25 Jan 29 11:39:17.034401 kernel: printk: console [ttyS0] enabled Jan 29 11:39:17.034411 kernel: ACPI: Core revision 20230628 Jan 29 11:39:17.034421 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 29 11:39:17.034429 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 11:39:17.034436 kernel: x2apic enabled Jan 29 11:39:17.034443 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 11:39:17.034450 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 29 11:39:17.034458 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 29 11:39:17.034465 kernel: kvm-guest: setup PV IPIs Jan 29 11:39:17.034483 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 11:39:17.034491 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 29 11:39:17.034498 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jan 29 11:39:17.034505 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 29 11:39:17.034513 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 29 11:39:17.034536 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 29 11:39:17.034544 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 11:39:17.034551 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 11:39:17.034559 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 11:39:17.034569 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 11:39:17.034579 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 29 11:39:17.034589 kernel: RETBleed: Mitigation: untrained return thunk Jan 29 11:39:17.034600 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 11:39:17.034610 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 11:39:17.034620 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 29 11:39:17.034630 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 29 11:39:17.034638 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 29 11:39:17.034645 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 11:39:17.034656 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 11:39:17.034663 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 11:39:17.034671 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 11:39:17.034678 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 29 11:39:17.034686 kernel: Freeing SMP alternatives memory: 32K Jan 29 11:39:17.034693 kernel: pid_max: default: 32768 minimum: 301 Jan 29 11:39:17.034701 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:39:17.034708 kernel: landlock: Up and running. Jan 29 11:39:17.034716 kernel: SELinux: Initializing. Jan 29 11:39:17.034726 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 11:39:17.034733 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 11:39:17.034741 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 29 11:39:17.034748 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 11:39:17.034758 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 11:39:17.034769 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 11:39:17.034780 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 29 11:39:17.034791 kernel: ... version: 0 Jan 29 11:39:17.034804 kernel: ... bit width: 48 Jan 29 11:39:17.034812 kernel: ... generic registers: 6 Jan 29 11:39:17.034819 kernel: ... value mask: 0000ffffffffffff Jan 29 11:39:17.034827 kernel: ... max period: 00007fffffffffff Jan 29 11:39:17.034835 kernel: ... fixed-purpose events: 0 Jan 29 11:39:17.034842 kernel: ... 
event mask: 000000000000003f Jan 29 11:39:17.034849 kernel: signal: max sigframe size: 1776 Jan 29 11:39:17.034857 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:39:17.034864 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:39:17.034872 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:39:17.034882 kernel: smpboot: x86: Booting SMP configuration: Jan 29 11:39:17.034889 kernel: .... node #0, CPUs: #1 #2 #3 Jan 29 11:39:17.034897 kernel: smp: Brought up 1 node, 4 CPUs Jan 29 11:39:17.034904 kernel: smpboot: Max logical packages: 1 Jan 29 11:39:17.034912 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jan 29 11:39:17.034919 kernel: devtmpfs: initialized Jan 29 11:39:17.034926 kernel: x86/mm: Memory block size: 128MB Jan 29 11:39:17.034934 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:39:17.034942 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 29 11:39:17.034952 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:39:17.034959 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:39:17.034967 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:39:17.034974 kernel: audit: type=2000 audit(1738150755.612:1): state=initialized audit_enabled=0 res=1 Jan 29 11:39:17.034982 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:39:17.034989 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 11:39:17.034997 kernel: cpuidle: using governor menu Jan 29 11:39:17.035004 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:39:17.035012 kernel: dca service started, version 1.12.1 Jan 29 11:39:17.035025 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 29 11:39:17.035036 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 29 11:39:17.035044 kernel: PCI: Using configuration type 1 for base access Jan 29 11:39:17.035053 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 11:39:17.035063 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 11:39:17.035073 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 11:39:17.035080 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:39:17.035088 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:39:17.035095 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:39:17.035106 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:39:17.035113 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:39:17.035121 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:39:17.035128 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 11:39:17.035136 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 11:39:17.035143 kernel: ACPI: Interpreter enabled Jan 29 11:39:17.035150 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 11:39:17.035166 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 11:39:17.035174 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 11:39:17.035185 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 11:39:17.035192 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 29 11:39:17.035200 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 11:39:17.035397 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:39:17.035562 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 29 11:39:17.035705 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 29 11:39:17.035716 kernel: PCI host bridge to bus 0000:00 Jan 29 11:39:17.035862 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 11:39:17.035980 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 11:39:17.036105 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 11:39:17.036253 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 29 11:39:17.036371 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 11:39:17.036481 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 29 11:39:17.036617 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 11:39:17.036762 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 29 11:39:17.036915 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 29 11:39:17.037064 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 29 11:39:17.037222 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 29 11:39:17.037359 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 29 11:39:17.037489 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 11:39:17.037659 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 11:39:17.037804 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 29 11:39:17.037930 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 29 11:39:17.038051 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 29 11:39:17.038199 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 29 11:39:17.038327 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 29 11:39:17.038448 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 29 
11:39:17.038607 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 29 11:39:17.038770 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 29 11:39:17.038897 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 29 11:39:17.039018 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 29 11:39:17.039139 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 29 11:39:17.039287 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 29 11:39:17.039438 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 29 11:39:17.039598 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 29 11:39:17.039730 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 29 11:39:17.039865 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 29 11:39:17.039991 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 29 11:39:17.040149 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 29 11:39:17.040304 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 29 11:39:17.040320 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 11:39:17.040337 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 11:39:17.040347 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 11:39:17.040357 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 11:39:17.040364 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 29 11:39:17.040371 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 29 11:39:17.040379 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 29 11:39:17.040387 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 29 11:39:17.040394 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 29 11:39:17.040405 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 29 11:39:17.040412 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 29 11:39:17.040420 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 29 11:39:17.040427 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 29 11:39:17.040435 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 29 11:39:17.040442 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 29 11:39:17.040450 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 29 11:39:17.040457 kernel: iommu: Default domain type: Translated Jan 29 11:39:17.040465 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 11:39:17.040474 kernel: PCI: Using ACPI for IRQ routing Jan 29 11:39:17.040482 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 11:39:17.040489 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 11:39:17.040497 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 29 11:39:17.040653 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 29 11:39:17.040773 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 29 11:39:17.040919 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 11:39:17.040932 kernel: vgaarb: loaded Jan 29 11:39:17.040939 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 29 11:39:17.040954 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 29 11:39:17.040969 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 11:39:17.040979 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 
11:39:17.040989 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:39:17.040998 kernel: pnp: PnP ACPI init Jan 29 11:39:17.041165 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 29 11:39:17.041178 kernel: pnp: PnP ACPI: found 6 devices Jan 29 11:39:17.041186 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 11:39:17.041198 kernel: NET: Registered PF_INET protocol family Jan 29 11:39:17.041206 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 11:39:17.041214 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 11:39:17.041222 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:39:17.041229 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 11:39:17.041237 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 11:39:17.041245 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 11:39:17.041252 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 11:39:17.041262 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 11:39:17.041281 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:39:17.041292 kernel: NET: Registered PF_XDP protocol family Jan 29 11:39:17.041423 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 11:39:17.041571 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 11:39:17.041697 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 11:39:17.041808 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 29 11:39:17.041918 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 29 11:39:17.042058 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 29 11:39:17.042077 kernel: PCI: CLS 0 bytes, default 64 Jan 29 11:39:17.042091 kernel: Initialise system trusted keyrings Jan 29 11:39:17.042105 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 11:39:17.042116 kernel: Key type asymmetric registered Jan 29 11:39:17.042125 kernel: Asymmetric key parser 'x509' registered Jan 29 11:39:17.042135 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:39:17.042145 kernel: io scheduler mq-deadline registered Jan 29 11:39:17.042164 kernel: io scheduler kyber registered Jan 29 11:39:17.042174 kernel: io scheduler bfq registered Jan 29 11:39:17.042189 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:39:17.042200 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 29 11:39:17.042211 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 29 11:39:17.042221 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 29 11:39:17.042231 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:39:17.042242 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:39:17.042255 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 11:39:17.042263 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 11:39:17.042273 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 11:39:17.042416 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 29 11:39:17.042427 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 11:39:17.042628 kernel: 
rtc_cmos 00:04: registered as rtc0 Jan 29 11:39:17.042748 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T11:39:16 UTC (1738150756) Jan 29 11:39:17.042859 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 29 11:39:17.042869 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 29 11:39:17.042877 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:39:17.042884 kernel: Segment Routing with IPv6 Jan 29 11:39:17.042896 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:39:17.042904 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:39:17.042911 kernel: Key type dns_resolver registered Jan 29 11:39:17.042919 kernel: IPI shorthand broadcast: enabled Jan 29 11:39:17.042927 kernel: sched_clock: Marking stable (966003696, 128463162)->(1178857369, -84390511) Jan 29 11:39:17.042934 kernel: registered taskstats version 1 Jan 29 11:39:17.042942 kernel: Loading compiled-in X.509 certificates Jan 29 11:39:17.042950 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55' Jan 29 11:39:17.042957 kernel: Key type .fscrypt registered Jan 29 11:39:17.042967 kernel: Key type fscrypt-provisioning registered Jan 29 11:39:17.042974 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:39:17.042982 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:39:17.042989 kernel: ima: No architecture policies found Jan 29 11:39:17.042997 kernel: clk: Disabling unused clocks Jan 29 11:39:17.043005 kernel: Freeing unused kernel image (initmem) memory: 42972K Jan 29 11:39:17.043012 kernel: Write protecting the kernel read-only data: 36864k Jan 29 11:39:17.043020 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 29 11:39:17.043028 kernel: Run /init as init process Jan 29 11:39:17.043038 kernel: with arguments: Jan 29 11:39:17.043045 kernel: /init Jan 29 11:39:17.043053 kernel: with environment: Jan 29 11:39:17.043060 kernel: HOME=/ Jan 29 11:39:17.043068 kernel: TERM=linux Jan 29 11:39:17.043075 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:39:17.043085 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:39:17.043095 systemd[1]: Detected virtualization kvm. Jan 29 11:39:17.043106 systemd[1]: Detected architecture x86-64. Jan 29 11:39:17.043114 systemd[1]: Running in initrd. Jan 29 11:39:17.043122 systemd[1]: No hostname configured, using default hostname. Jan 29 11:39:17.043129 systemd[1]: Hostname set to <localhost>. Jan 29 11:39:17.043138 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:39:17.043146 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:39:17.043154 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:39:17.043170 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:39:17.043182 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:39:17.043202 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:39:17.043213 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:39:17.043221 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:39:17.043233 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:39:17.043246 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:39:17.043255 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:39:17.043263 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:39:17.043271 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:39:17.043279 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:39:17.043287 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:39:17.043296 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:39:17.043307 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:39:17.043321 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:39:17.043330 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:39:17.043338 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:39:17.043346 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:39:17.043354 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:39:17.043362 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:39:17.043371 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:39:17.043379 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:39:17.043390 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:39:17.043398 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:39:17.043406 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:39:17.043415 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:39:17.043423 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:39:17.043431 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:39:17.043440 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:39:17.043448 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:39:17.043474 systemd-journald[192]: Collecting audit messages is disabled. Jan 29 11:39:17.043496 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:39:17.043507 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:39:17.043528 systemd-journald[192]: Journal started Jan 29 11:39:17.043561 systemd-journald[192]: Runtime Journal (/run/log/journal/5a0a9f7aa5b7414cb3b35bac4739c47b) is 6.0M, max 48.4M, 42.3M free. Jan 29 11:39:17.044723 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:39:17.048877 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:39:17.089797 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 29 11:39:17.089827 kernel: Bridge firewalling registered Jan 29 11:39:17.049063 systemd-modules-load[195]: Inserted module 'overlay' Jan 29 11:39:17.076611 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 29 11:39:17.088247 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:39:17.097802 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:39:17.100776 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:39:17.104600 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:39:17.107531 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:39:17.110355 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:39:17.114789 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:39:17.122929 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:39:17.130803 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:39:17.132972 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:39:17.133762 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:39:17.135762 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:39:17.153343 dracut-cmdline[229]: dracut-dracut-053 Jan 29 11:39:17.156971 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:39:17.176267 systemd-resolved[224]: Positive Trust Anchors: Jan 29 11:39:17.176282 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:39:17.176313 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:39:17.190485 systemd-resolved[224]: Defaulting to hostname 'linux'. Jan 29 11:39:17.191881 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:39:17.193405 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:39:17.269562 kernel: SCSI subsystem initialized Jan 29 11:39:17.280560 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:39:17.291551 kernel: iscsi: registered transport (tcp) Jan 29 11:39:17.319930 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:39:17.320004 kernel: QLogic iSCSI HBA Driver Jan 29 11:39:17.369385 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 29 11:39:17.382761 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:39:17.411619 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:39:17.411709 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:39:17.411725 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:39:17.459562 kernel: raid6: avx2x4 gen() 21883 MB/s Jan 29 11:39:17.476548 kernel: raid6: avx2x2 gen() 24745 MB/s Jan 29 11:39:17.493653 kernel: raid6: avx2x1 gen() 24043 MB/s Jan 29 11:39:17.493674 kernel: raid6: using algorithm avx2x2 gen() 24745 MB/s Jan 29 11:39:17.524562 kernel: raid6: .... xor() 18667 MB/s, rmw enabled Jan 29 11:39:17.524630 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:39:17.555547 kernel: xor: automatically using best checksumming function avx Jan 29 11:39:17.719558 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:39:17.733587 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:39:17.740698 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:39:17.756568 systemd-udevd[412]: Using default interface naming scheme 'v255'. Jan 29 11:39:17.761554 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:39:17.769672 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:39:17.784457 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Jan 29 11:39:17.815164 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:39:17.826715 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:39:17.890310 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:39:17.898936 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:39:17.911816 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:39:17.915495 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:39:17.918681 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:39:17.921294 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:39:17.926555 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 29 11:39:17.956673 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 29 11:39:17.956853 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:39:17.956867 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:39:17.956882 kernel: GPT:9289727 != 19775487 Jan 29 11:39:17.956893 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:39:17.956913 kernel: GPT:9289727 != 19775487 Jan 29 11:39:17.956923 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:39:17.956933 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:39:17.929752 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:39:17.940635 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:39:17.994124 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:39:18.009728 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:39:18.009753 kernel: libata version 3.00 loaded. 
Jan 29 11:39:18.009768 kernel: AES CTR mode by8 optimization enabled Jan 29 11:39:17.994312 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:39:18.006891 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:39:18.009083 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:39:18.020258 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 11:39:18.083903 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 11:39:18.084240 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 11:39:18.084409 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 11:39:18.084952 kernel: scsi host0: ahci Jan 29 11:39:18.085107 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (457) Jan 29 11:39:18.085118 kernel: scsi host1: ahci Jan 29 11:39:18.085281 kernel: scsi host2: ahci Jan 29 11:39:18.085429 kernel: scsi host3: ahci Jan 29 11:39:18.085595 kernel: scsi host4: ahci Jan 29 11:39:18.085741 kernel: scsi host5: ahci Jan 29 11:39:18.085882 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 29 11:39:18.085894 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (473) Jan 29 11:39:18.085905 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 29 11:39:18.085915 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 29 11:39:18.085929 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 29 11:39:18.085939 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 29 11:39:18.085949 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 29 11:39:18.009296 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:39:18.013222 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:39:18.026784 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:39:18.087278 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 11:39:18.128593 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 11:39:18.130230 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:39:18.136420 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 11:39:18.141758 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 11:39:18.148081 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:39:18.161654 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:39:18.164083 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:39:18.183093 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 11:39:18.776564 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 11:39:18.776652 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 11:39:18.777557 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 11:39:18.778555 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 11:39:18.779560 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 11:39:18.779587 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 29 11:39:18.781170 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 29 11:39:18.781195 kernel: ata3.00: applying bridge limits Jan 29 11:39:18.782544 kernel: ata3.00: configured for UDMA/100 Jan 29 11:39:18.784554 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 11:39:18.848554 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 29 11:39:18.862332 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 11:39:18.862345 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 29 11:39:18.882258 disk-uuid[567]: Primary Header is updated. Jan 29 11:39:18.882258 disk-uuid[567]: Secondary Entries is updated. Jan 29 11:39:18.882258 disk-uuid[567]: Secondary Header is updated. Jan 29 11:39:18.886065 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:39:19.923396 disk-uuid[577]: The operation has completed successfully. Jan 29 11:39:19.924905 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:39:19.951587 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:39:19.951701 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:39:19.975672 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:39:19.978568 sh[593]: Success Jan 29 11:39:19.990559 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 29 11:39:20.021970 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:39:20.035059 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:39:20.037725 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 11:39:20.050828 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 Jan 29 11:39:20.050869 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:39:20.050879 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:39:20.051868 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:39:20.052614 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:39:20.057198 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:39:20.058771 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:39:20.059570 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:39:20.062675 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 29 11:39:20.075490 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:39:20.075552 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:39:20.075565 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:39:20.078823 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:39:20.088202 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:39:20.090027 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:39:20.100676 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:39:20.105678 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:39:20.302153 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:39:20.308564 ignition[691]: Ignition 2.20.0 Jan 29 11:39:20.308574 ignition[691]: Stage: fetch-offline Jan 29 11:39:20.312790 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:39:20.308634 ignition[691]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:39:20.308647 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:39:20.308768 ignition[691]: parsed url from cmdline: "" Jan 29 11:39:20.308773 ignition[691]: no config URL provided Jan 29 11:39:20.308780 ignition[691]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:39:20.308791 ignition[691]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:39:20.308826 ignition[691]: op(1): [started] loading QEMU firmware config module Jan 29 11:39:20.308833 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 29 11:39:20.322431 ignition[691]: op(1): [finished] loading QEMU firmware config module Jan 29 11:39:20.338480 systemd-networkd[780]: lo: Link UP Jan 29 11:39:20.338490 systemd-networkd[780]: lo: Gained carrier Jan 29 11:39:20.340125 systemd-networkd[780]: Enumeration completed Jan 29 11:39:20.340221 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:39:20.340505 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:39:20.340510 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:39:20.340933 systemd[1]: Reached target network.target - Network. Jan 29 11:39:20.341615 systemd-networkd[780]: eth0: Link UP Jan 29 11:39:20.341619 systemd-networkd[780]: eth0: Gained carrier Jan 29 11:39:20.341627 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:39:20.366560 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:39:20.376498 ignition[691]: parsing config with SHA512: 97ad3e1a5d3cd890496a06d3e42583454a11a388a60a732087bd83d14e21321069c5c74d3c604562f0c19994fdb9095c4ea6e2793243451996ed20a1d866f29c Jan 29 11:39:20.384147 unknown[691]: fetched base config from "system" Jan 29 11:39:20.384158 unknown[691]: fetched user config from "qemu" Jan 29 11:39:20.384579 ignition[691]: fetch-offline: fetch-offline passed Jan 29 11:39:20.384655 ignition[691]: Ignition finished successfully Jan 29 11:39:20.386874 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 11:39:20.388266 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 11:39:20.394679 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:39:20.414473 ignition[785]: Ignition 2.20.0 Jan 29 11:39:20.414484 ignition[785]: Stage: kargs Jan 29 11:39:20.414673 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:39:20.414687 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:39:20.415575 ignition[785]: kargs: kargs passed Jan 29 11:39:20.418691 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:39:20.415625 ignition[785]: Ignition finished successfully Jan 29 11:39:20.426676 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:39:20.515898 ignition[793]: Ignition 2.20.0 Jan 29 11:39:20.515908 ignition[793]: Stage: disks Jan 29 11:39:20.516085 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:39:20.516096 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:39:20.518993 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:39:20.516928 ignition[793]: disks: disks passed Jan 29 11:39:20.521004 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:39:20.516968 ignition[793]: Ignition finished successfully Jan 29 11:39:20.523034 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:39:20.525110 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:39:20.527363 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:39:20.528538 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:39:20.539666 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:39:20.550501 systemd-resolved[224]: Detected conflict on linux IN A 10.0.0.147 Jan 29 11:39:20.550532 systemd-resolved[224]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Jan 29 11:39:20.553994 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 11:39:20.565326 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:39:20.576704 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:39:20.693550 kernel: EXT4-fs (vda9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none. Jan 29 11:39:20.694370 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:39:20.695948 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:39:20.711637 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:39:20.713339 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:39:20.714232 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 11:39:20.714275 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:39:20.714298 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:39:20.726022 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 29 11:39:20.729638 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (811) Jan 29 11:39:20.729706 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:39:20.729574 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:39:20.734890 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:39:20.734910 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:39:20.736566 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:39:20.739472 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:39:20.774941 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:39:20.780972 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:39:20.786812 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:39:20.792065 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:39:20.885910 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:39:20.897616 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:39:20.898674 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:39:20.910560 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:39:20.929234 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:39:20.969326 ignition[928]: INFO : Ignition 2.20.0 Jan 29 11:39:20.969326 ignition[928]: INFO : Stage: mount Jan 29 11:39:20.971047 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:39:20.971047 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:39:20.971047 ignition[928]: INFO : mount: mount passed Jan 29 11:39:20.971047 ignition[928]: INFO : Ignition finished successfully Jan 29 11:39:20.976752 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:39:20.985672 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:39:21.050363 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:39:21.063715 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:39:21.085564 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (939) Jan 29 11:39:21.085599 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:39:21.087701 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:39:21.087723 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:39:21.091561 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:39:21.093364 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:39:21.117487 ignition[957]: INFO : Ignition 2.20.0 Jan 29 11:39:21.117487 ignition[957]: INFO : Stage: files Jan 29 11:39:21.119629 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:39:21.119629 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:39:21.119629 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:39:21.123452 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:39:21.123452 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:39:21.129347 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:39:21.131351 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:39:21.133467 unknown[957]: wrote ssh authorized keys file for user: core Jan 29 11:39:21.134925 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:39:21.134925 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 11:39:21.134925 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 11:39:21.141268 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:39:21.141268 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 11:39:21.186002 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 11:39:21.350405 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:39:21.350405 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:39:21.354108 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 11:39:21.712816 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 29 11:39:21.846071 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:39:21.847898 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:39:21.849623 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:39:21.851342 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:39:21.853132 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:39:21.855043 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:39:21.857139 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:39:21.859093 ignition[957]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:39:21.861108 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:39:21.863297 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:39:21.865191 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:39:21.866926 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:39:21.869559 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:39:21.871948 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:39:21.874088 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 11:39:22.169733 systemd-networkd[780]: eth0: Gained IPv6LL Jan 29 11:39:22.194495 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 29 11:39:22.770697 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:39:22.770697 ignition[957]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 29 11:39:22.774478 ignition[957]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 11:39:22.774478 ignition[957]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 11:39:22.774478 ignition[957]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 29 11:39:22.774478 ignition[957]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 29 11:39:22.774478 ignition[957]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:39:22.774478 ignition[957]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:39:22.774478 ignition[957]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 29 11:39:22.774478 ignition[957]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 29 11:39:22.774478 ignition[957]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:39:22.774478 ignition[957]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:39:22.774478 ignition[957]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jan 29 11:39:22.774478 ignition[957]: 
INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 11:39:22.823463 ignition[957]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:39:22.829043 ignition[957]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:39:22.831038 ignition[957]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 11:39:22.831038 ignition[957]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:39:22.831038 ignition[957]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:39:22.831038 ignition[957]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:39:22.831038 ignition[957]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:39:22.831038 ignition[957]: INFO : files: files passed Jan 29 11:39:22.831038 ignition[957]: INFO : Ignition finished successfully Jan 29 11:39:22.834056 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:39:22.844787 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:39:22.846186 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:39:22.854377 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:39:22.854653 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:39:22.859917 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 11:39:22.862880 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:39:22.862880 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:39:22.866431 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:39:22.867841 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:39:22.868756 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:39:22.878828 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:39:22.905202 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:39:22.905347 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:39:22.906193 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:39:22.909313 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:39:22.909883 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:39:22.910738 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:39:22.932595 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:39:22.940743 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:39:22.951287 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
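The preset operations above (removing the enablement symlinks for coreos-metadata.service and enabling prepare-helm.service inside the target root) are roughly what systemctl would do when pointed at /sysroot. A sketch under that assumption:

    # Approximate effect of the Ignition preset steps logged above
    systemctl --root=/sysroot disable coreos-metadata.service
    systemctl --root=/sysroot enable prepare-helm.service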
Jan 29 11:39:22.952786 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:39:22.955404 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:39:22.957762 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:39:22.957903 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:39:22.960453 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:39:22.962459 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:39:22.964867 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:39:22.967249 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:39:22.969627 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:39:22.972169 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:39:22.974646 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:39:22.977309 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:39:22.979681 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:39:22.982329 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:39:22.984368 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:39:22.984525 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:39:22.987115 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:39:22.988983 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:39:22.991424 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:39:22.991605 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:39:22.994123 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:39:22.994251 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:39:22.996844 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:39:22.996970 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:39:22.999331 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:39:23.001277 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:39:23.001419 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:39:23.004256 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:39:23.006374 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:39:23.008580 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:39:23.008695 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:39:23.010805 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:39:23.010914 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:39:23.013002 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:39:23.013132 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:39:23.015252 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:39:23.015374 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:39:23.026717 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 29 11:39:23.028948 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:39:23.029117 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:39:23.032819 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:39:23.034433 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:39:23.034601 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:39:23.036905 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:39:23.037155 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:39:23.071858 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:39:23.073789 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:39:23.075004 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:39:23.084534 ignition[1012]: INFO : Ignition 2.20.0 Jan 29 11:39:23.084534 ignition[1012]: INFO : Stage: umount Jan 29 11:39:23.086414 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:39:23.086414 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:39:23.086414 ignition[1012]: INFO : umount: umount passed Jan 29 11:39:23.086414 ignition[1012]: INFO : Ignition finished successfully Jan 29 11:39:23.093023 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:39:23.093169 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:39:23.094072 systemd[1]: Stopped target network.target - Network. Jan 29 11:39:23.098611 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:39:23.099773 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:39:23.101924 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:39:23.103061 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:39:23.105240 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:39:23.105296 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:39:23.108499 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:39:23.109601 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:39:23.112246 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:39:23.115092 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:39:23.120593 systemd-networkd[780]: eth0: DHCPv6 lease lost Jan 29 11:39:23.122705 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:39:23.122908 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:39:23.123841 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:39:23.123890 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:39:23.136639 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:39:23.137094 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:39:23.137156 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:39:23.137568 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:39:23.138218 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jan 29 11:39:23.138344 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:39:23.144914 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:39:23.144981 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:39:23.145855 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:39:23.145902 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:39:23.148430 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:39:23.148476 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:39:23.168969 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:39:23.169178 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:39:23.170353 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:39:23.170429 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:39:23.172932 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:39:23.172975 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:39:23.173231 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:39:23.173278 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:39:23.174111 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:39:23.174157 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:39:23.180938 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:39:23.181003 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:39:23.185893 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:39:23.186435 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:39:23.186502 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:39:23.187011 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:39:23.187068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:39:23.197584 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:39:23.197739 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:39:23.201187 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:39:23.201323 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:39:23.359052 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:39:23.359195 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:39:23.360378 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:39:23.362327 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:39:23.362391 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:39:23.378786 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:39:23.386980 systemd[1]: Switching root. Jan 29 11:39:23.421130 systemd-journald[192]: Journal stopped Jan 29 11:39:24.764738 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
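The "Switching root" and journal SIGTERM sequence above is the initrd handing control to the prepared root filesystem; initrd-switch-root.service performs roughly the following (a sketch, not the literal unit file):

    # Pivot from the initramfs into the prepared /sysroot
    systemctl --no-block switch-root /sysroot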
Jan 29 11:39:24.764809 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:39:24.764833 kernel: SELinux: policy capability open_perms=1 Jan 29 11:39:24.764848 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:39:24.764871 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:39:24.764890 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:39:24.764905 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:39:24.764919 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:39:24.764934 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:39:24.766235 kernel: audit: type=1403 audit(1738150763.975:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:39:24.766266 systemd[1]: Successfully loaded SELinux policy in 49.075ms. Jan 29 11:39:24.766290 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.226ms. Jan 29 11:39:24.766309 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:39:24.766325 systemd[1]: Detected virtualization kvm. Jan 29 11:39:24.766345 systemd[1]: Detected architecture x86-64. Jan 29 11:39:24.766361 systemd[1]: Detected first boot. Jan 29 11:39:24.766383 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:39:24.766399 zram_generator::config[1073]: No configuration found. Jan 29 11:39:24.766416 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:39:24.766432 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:39:24.766448 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:39:24.766465 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:39:24.766485 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:39:24.766501 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:39:24.766532 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:39:24.766550 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:39:24.766567 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:39:24.766583 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:39:24.766599 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:39:24.766616 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:39:24.766632 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:39:24.766652 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:39:24.766668 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:39:24.766684 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:39:24.766701 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:39:24.766717 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Jan 29 11:39:24.766733 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:39:24.766757 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:39:24.766777 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:39:24.766793 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:39:24.766813 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:39:24.766829 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:39:24.766845 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:39:24.766861 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:39:24.766877 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:39:24.766893 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:39:24.766911 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:39:24.766927 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:39:24.766957 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:39:24.766974 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:39:24.766989 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:39:24.767005 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:39:24.767021 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:39:24.767037 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:39:24.767053 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:39:24.767069 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:39:24.767086 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:39:24.767104 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:39:24.767121 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:39:24.767137 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:39:24.767153 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:39:24.767169 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:39:24.767184 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:39:24.767200 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:39:24.767215 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:39:24.767234 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:39:24.767251 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:39:24.767269 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 29 11:39:24.767286 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
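The modprobe@*.service instances started above simply load the named kernel modules; the manual equivalent for the modules in this log would be approximately:

    # Modules loaded via modprobe@.service during early userspace
    modprobe -a configfs dm_mod drm efi_pstore fuse loop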
Jan 29 11:39:24.767302 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:39:24.767318 kernel: loop: module loaded Jan 29 11:39:24.767333 kernel: fuse: init (API version 7.39) Jan 29 11:39:24.767349 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:39:24.767365 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:39:24.767384 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:39:24.767400 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:39:24.767416 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:39:24.767432 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:39:24.768822 systemd-journald[1151]: Collecting audit messages is disabled. Jan 29 11:39:24.768870 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:39:24.768890 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:39:24.768912 kernel: ACPI: bus type drm_connector registered Jan 29 11:39:24.768930 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:39:24.768958 systemd-journald[1151]: Journal started Jan 29 11:39:24.768987 systemd-journald[1151]: Runtime Journal (/run/log/journal/5a0a9f7aa5b7414cb3b35bac4739c47b) is 6.0M, max 48.4M, 42.3M free. Jan 29 11:39:24.771583 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:39:24.773545 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:39:24.775370 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:39:24.776998 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:39:24.778879 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:39:24.779165 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:39:24.780861 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:39:24.781139 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:39:24.782820 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:39:24.783102 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:39:24.784670 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:39:24.784958 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:39:24.786995 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:39:24.787264 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:39:24.788876 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:39:24.789184 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:39:24.790921 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:39:24.792811 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:39:24.794917 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:39:24.802800 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jan 29 11:39:24.810465 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:39:24.820605 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:39:24.823201 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:39:24.824572 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:39:24.828708 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:39:24.831501 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:39:24.833261 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:39:24.835402 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:39:24.836716 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:39:24.841684 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:39:24.847938 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:39:24.853708 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:39:24.855448 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:39:24.866225 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:39:24.866432 systemd-journald[1151]: Time spent on flushing to /var/log/journal/5a0a9f7aa5b7414cb3b35bac4739c47b is 18.132ms for 947 entries. Jan 29 11:39:24.866432 systemd-journald[1151]: System Journal (/var/log/journal/5a0a9f7aa5b7414cb3b35bac4739c47b) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:39:24.893220 systemd-journald[1151]: Received client request to flush runtime journal. Jan 29 11:39:24.869777 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:39:24.874593 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:39:24.887730 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:39:24.892330 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:39:24.895323 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:39:24.902957 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Jan 29 11:39:24.902980 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Jan 29 11:39:24.903464 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 11:39:24.911102 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:39:24.919668 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:39:24.952038 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:39:24.965671 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:39:24.982280 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Jan 29 11:39:24.982301 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. 
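systemd-journal-flush.service, started above, moves the runtime journal from /run/log/journal to persistent storage under /var/log/journal. In practice that is the same as asking journald to flush, and the on-disk result can be checked afterwards:

    # What the flush service triggers, plus a check of persistent journal usage
    journalctl --flush
    journalctl --disk-usage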
Jan 29 11:39:24.988702 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:39:25.693145 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:39:25.709844 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:39:25.733689 systemd-udevd[1239]: Using default interface naming scheme 'v255'. Jan 29 11:39:25.749214 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:39:25.762100 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:39:25.776961 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:39:25.789469 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 29 11:39:25.907548 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1252) Jan 29 11:39:25.949550 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 11:39:25.961546 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:39:25.961617 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 11:39:25.967701 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:39:25.979945 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 11:39:25.982805 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 11:39:25.983040 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 11:39:26.015599 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:39:26.046315 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:39:26.051907 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:39:26.093451 systemd-networkd[1246]: lo: Link UP Jan 29 11:39:26.094250 systemd-networkd[1246]: lo: Gained carrier Jan 29 11:39:26.098037 systemd-networkd[1246]: Enumeration completed Jan 29 11:39:26.098809 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:39:26.100083 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:39:26.100092 systemd-networkd[1246]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:39:26.101074 systemd-networkd[1246]: eth0: Link UP Jan 29 11:39:26.101079 systemd-networkd[1246]: eth0: Gained carrier Jan 29 11:39:26.101090 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:39:26.103898 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:39:26.116033 kernel: kvm_amd: TSC scaling supported Jan 29 11:39:26.116081 kernel: kvm_amd: Nested Virtualization enabled Jan 29 11:39:26.116094 kernel: kvm_amd: Nested Paging enabled Jan 29 11:39:26.116107 kernel: kvm_amd: LBR virtualization supported Jan 29 11:39:26.116656 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 29 11:39:26.118047 kernel: kvm_amd: Virtual GIF supported Jan 29 11:39:26.130576 systemd-networkd[1246]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:39:26.140773 kernel: EDAC MC: Ver: 3.0.0 Jan 29 11:39:26.175427 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
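After systemd-networkd matched eth0 against zz-default.network and acquired 10.0.0.147/16 over DHCP (lines above), the lease and the matching .network file can be inspected from the running system; a small sketch:

    # Show which .network file matched eth0 and the DHCPv4 address it produced
    networkctl status eth0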
Jan 29 11:39:26.182402 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:39:26.196899 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:39:26.207912 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:39:26.241546 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:39:26.243635 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:39:26.254919 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:39:26.261068 lvm[1288]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:39:26.301404 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:39:26.303297 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:39:26.304741 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:39:26.304764 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:39:26.306077 systemd[1]: Reached target machines.target - Containers. Jan 29 11:39:26.308271 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:39:26.317630 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:39:26.320649 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:39:26.321927 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:39:26.323067 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:39:26.325842 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:39:26.330173 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:39:26.334484 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:39:26.341548 kernel: loop0: detected capacity change from 0 to 210664 Jan 29 11:39:26.347586 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:39:26.358488 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:39:26.359418 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:39:26.368549 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:39:26.387557 kernel: loop1: detected capacity change from 0 to 140992 Jan 29 11:39:26.421556 kernel: loop2: detected capacity change from 0 to 138184 Jan 29 11:39:26.458549 kernel: loop3: detected capacity change from 0 to 210664 Jan 29 11:39:26.467554 kernel: loop4: detected capacity change from 0 to 140992 Jan 29 11:39:26.476556 kernel: loop5: detected capacity change from 0 to 138184 Jan 29 11:39:26.485679 (sd-merge)[1310]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 11:39:26.486312 (sd-merge)[1310]: Merged extensions into '/usr'. Jan 29 11:39:26.490984 systemd[1]: Reloading requested from client PID 1296 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:39:26.491003 systemd[1]: Reloading... 
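The (sd-merge) lines above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr and /opt. The merged state can be inspected and refreshed with the systemd-sysext tool (illustrative):

    # List extension images and whether they are currently merged
    systemd-sysext status
    # Re-merge after adding or removing images under /etc/extensions or /var/lib/extensions
    systemd-sysext refresh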
Jan 29 11:39:26.541700 zram_generator::config[1335]: No configuration found. Jan 29 11:39:26.600759 ldconfig[1293]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:39:26.710592 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:39:26.800714 systemd[1]: Reloading finished in 309 ms. Jan 29 11:39:26.823023 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:39:26.825015 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:39:26.843852 systemd[1]: Starting ensure-sysext.service... Jan 29 11:39:26.847617 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:39:26.857105 systemd[1]: Reloading requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:39:26.857126 systemd[1]: Reloading... Jan 29 11:39:26.877229 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:39:26.877731 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:39:26.879055 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:39:26.879494 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Jan 29 11:39:26.879618 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Jan 29 11:39:26.884069 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:39:26.884089 systemd-tmpfiles[1383]: Skipping /boot Jan 29 11:39:26.901609 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:39:26.901629 systemd-tmpfiles[1383]: Skipping /boot Jan 29 11:39:26.915651 zram_generator::config[1415]: No configuration found. Jan 29 11:39:27.053132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:39:27.123713 systemd[1]: Reloading finished in 266 ms. Jan 29 11:39:27.153263 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:39:27.159967 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:39:27.162693 systemd-networkd[1246]: eth0: Gained IPv6LL Jan 29 11:39:27.163130 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:39:27.166557 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:39:27.171163 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:39:27.175457 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:39:27.177390 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:39:27.188400 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:39:27.188713 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 29 11:39:27.190774 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:39:27.202836 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:39:27.207420 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:39:27.209174 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:39:27.209337 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:39:27.211220 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:39:27.213173 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:39:27.213420 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:39:27.215857 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:39:27.216134 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:39:27.221082 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:39:27.221371 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:39:27.227948 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:39:27.233570 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:39:27.234042 augenrules[1498]: No rules Jan 29 11:39:27.233875 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:39:27.242736 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:39:27.247104 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:39:27.252632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:39:27.253844 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:39:27.256924 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:39:27.258416 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:39:27.259754 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:39:27.260088 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:39:27.262070 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:39:27.264239 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:39:27.264495 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:39:27.266261 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:39:27.266514 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:39:27.268892 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:39:27.269126 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:39:27.279484 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 29 11:39:27.286029 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:39:27.286994 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:39:27.291685 systemd-resolved[1461]: Positive Trust Anchors: Jan 29 11:39:27.292027 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:39:27.292257 systemd-resolved[1461]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:39:27.292354 systemd-resolved[1461]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:39:27.296809 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:39:27.297409 systemd-resolved[1461]: Defaulting to hostname 'linux'. Jan 29 11:39:27.300780 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:39:27.306798 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:39:27.308443 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:39:27.308833 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:39:27.308957 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:39:27.310395 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:39:27.312647 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:39:27.317337 augenrules[1519]: /sbin/augenrules: No change Jan 29 11:39:27.317938 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:39:27.318239 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:39:27.320233 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:39:27.320473 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:39:27.322329 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:39:27.322827 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:39:27.324802 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:39:27.325226 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:39:27.327898 augenrules[1546]: No rules Jan 29 11:39:27.329093 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:39:27.329507 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:39:27.331355 systemd[1]: Finished ensure-sysext.service. Jan 29 11:39:27.340190 systemd[1]: Reached target network.target - Network. Jan 29 11:39:27.341202 systemd[1]: Reached target network-online.target - Network is Online. 
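systemd-resolved came up above with the default DNSSEC trust anchor and fell back to the hostname 'linux'; both can be confirmed from the running system (sketch):

    # Inspect resolver state and the published hostname
    resolvectl status
    hostnamectl status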
Jan 29 11:39:27.342291 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:39:27.343551 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:39:27.343628 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:39:27.359809 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:39:27.426898 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:39:27.428069 systemd-timesyncd[1560]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 11:39:27.428114 systemd-timesyncd[1560]: Initial clock synchronization to Wed 2025-01-29 11:39:27.505230 UTC. Jan 29 11:39:27.429000 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:39:27.430316 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:39:27.431782 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:39:27.433275 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:39:27.434655 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:39:27.434695 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:39:27.435700 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:39:27.437118 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:39:27.438469 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:39:27.439933 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:39:27.441931 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:39:27.445916 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:39:27.449063 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:39:27.457551 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:39:27.458952 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:39:27.460062 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:39:27.461325 systemd[1]: System is tainted: cgroupsv1 Jan 29 11:39:27.461380 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:39:27.461417 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:39:27.463346 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:39:27.466208 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:39:27.469385 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:39:27.474677 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:39:27.477824 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:39:27.479151 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
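systemd-timesyncd above contacted 10.0.0.1:123 and set the initial clock; its live status can be queried as follows (sketch):

    # Show the NTP server, poll interval and last synchronization
    timedatectl timesync-status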
Jan 29 11:39:27.481508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:39:27.484707 jq[1568]: false Jan 29 11:39:27.487673 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:39:27.495707 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:39:27.499610 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:39:27.502604 extend-filesystems[1570]: Found loop3 Jan 29 11:39:27.503992 extend-filesystems[1570]: Found loop4 Jan 29 11:39:27.503992 extend-filesystems[1570]: Found loop5 Jan 29 11:39:27.503992 extend-filesystems[1570]: Found sr0 Jan 29 11:39:27.503992 extend-filesystems[1570]: Found vda Jan 29 11:39:27.503992 extend-filesystems[1570]: Found vda1 Jan 29 11:39:27.503992 extend-filesystems[1570]: Found vda2 Jan 29 11:39:27.503992 extend-filesystems[1570]: Found vda3 Jan 29 11:39:27.503992 extend-filesystems[1570]: Found usr Jan 29 11:39:27.503992 extend-filesystems[1570]: Found vda4 Jan 29 11:39:27.503992 extend-filesystems[1570]: Found vda6 Jan 29 11:39:27.503992 extend-filesystems[1570]: Found vda7 Jan 29 11:39:27.503992 extend-filesystems[1570]: Found vda9 Jan 29 11:39:27.503992 extend-filesystems[1570]: Checking size of /dev/vda9 Jan 29 11:39:27.504696 dbus-daemon[1566]: [system] SELinux support is enabled Jan 29 11:39:27.505984 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:39:27.512949 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:39:27.535921 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:39:27.538135 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:39:27.542653 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:39:27.546308 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:39:27.548756 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:39:27.555014 extend-filesystems[1570]: Resized partition /dev/vda9 Jan 29 11:39:27.557018 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:39:27.557347 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:39:27.559776 jq[1600]: true Jan 29 11:39:27.563023 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:39:27.563344 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:39:27.569474 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:39:27.569936 extend-filesystems[1604]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:39:27.582012 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1244) Jan 29 11:39:27.582094 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:39:27.585983 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:39:27.589718 update_engine[1597]: I20250129 11:39:27.586382 1597 main.cc:92] Flatcar Update Engine starting Jan 29 11:39:27.586330 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
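sshd-keygen.service, started above, creates any missing SSH host keys; the matching "generating new host keys: RSA ECDSA ED25519" output appears just below and corresponds to ssh-keygen's -A mode (sketch):

    # Create any host key types that are missing under /etc/ssh
    ssh-keygen -A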
Jan 29 11:39:27.592776 update_engine[1597]: I20250129 11:39:27.591679 1597 update_check_scheduler.cc:74] Next update check in 10m28s Jan 29 11:39:27.604100 (ntainerd)[1614]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:39:27.607555 jq[1613]: true Jan 29 11:39:27.611945 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:39:27.612311 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:39:27.643376 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:39:27.646048 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:39:27.646148 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:39:27.646171 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:39:27.648011 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:39:27.648089 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:39:27.650452 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:39:27.659704 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:39:27.679490 sshd_keygen[1607]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:39:27.703787 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:39:27.736744 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:39:27.745680 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:39:27.746150 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:39:27.749345 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:39:27.779072 systemd-logind[1593]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:39:27.779101 systemd-logind[1593]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:39:27.780262 systemd-logind[1593]: New seat seat0. Jan 29 11:39:27.780372 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:39:27.793608 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:39:27.800835 tar[1610]: linux-amd64/helm Jan 29 11:39:27.801783 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:39:27.801844 locksmithd[1646]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:39:27.803789 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:39:27.805098 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:39:27.900547 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:39:27.993837 extend-filesystems[1604]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:39:27.993837 extend-filesystems[1604]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:39:27.993837 extend-filesystems[1604]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
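The resize figures are in 4 KiB blocks, so the root filesystem grows from roughly 2.1 GiB to about 7.1 GiB. A quick conversion using the numbers from the log:

# Convert the resize2fs block counts from the log into human-readable sizes.
BLOCK_SIZE = 4096            # "(4k) blocks" per the extend-filesystems output
OLD_BLOCKS = 553_472         # "resizing filesystem from 553472 ..."
NEW_BLOCKS = 1_864_699       # "... to 1864699 blocks"

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB, after: {gib(NEW_BLOCKS):.2f} GiB")
# before: 2.11 GiB, after: 7.11 GiB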
Jan 29 11:39:28.001377 extend-filesystems[1570]: Resized filesystem in /dev/vda9 Jan 29 11:39:27.995999 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:39:27.997419 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:39:28.007558 bash[1645]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:39:28.009478 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:39:28.011978 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:39:28.091300 containerd[1614]: time="2025-01-29T11:39:28.091033481Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:39:28.121118 containerd[1614]: time="2025-01-29T11:39:28.121075114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:39:28.126084 containerd[1614]: time="2025-01-29T11:39:28.125826601Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:39:28.126084 containerd[1614]: time="2025-01-29T11:39:28.125870118Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:39:28.126084 containerd[1614]: time="2025-01-29T11:39:28.125890177Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:39:28.126252 containerd[1614]: time="2025-01-29T11:39:28.126098193Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:39:28.126252 containerd[1614]: time="2025-01-29T11:39:28.126115572Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:39:28.126252 containerd[1614]: time="2025-01-29T11:39:28.126185722Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:39:28.126252 containerd[1614]: time="2025-01-29T11:39:28.126198569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:39:28.126497 containerd[1614]: time="2025-01-29T11:39:28.126463667Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:39:28.126497 containerd[1614]: time="2025-01-29T11:39:28.126482148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:39:28.126497 containerd[1614]: time="2025-01-29T11:39:28.126496280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:39:28.126596 containerd[1614]: time="2025-01-29T11:39:28.126506426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:39:28.126644 containerd[1614]: time="2025-01-29T11:39:28.126619477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:39:28.126894 containerd[1614]: time="2025-01-29T11:39:28.126862907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:39:28.127041 containerd[1614]: time="2025-01-29T11:39:28.127026506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:39:28.127090 containerd[1614]: time="2025-01-29T11:39:28.127041618Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:39:28.127192 containerd[1614]: time="2025-01-29T11:39:28.127158837Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:39:28.127241 containerd[1614]: time="2025-01-29T11:39:28.127223060Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:39:28.265437 containerd[1614]: time="2025-01-29T11:39:28.265223151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:39:28.265437 containerd[1614]: time="2025-01-29T11:39:28.265329546Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:39:28.265437 containerd[1614]: time="2025-01-29T11:39:28.265350981Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:39:28.265437 containerd[1614]: time="2025-01-29T11:39:28.265371354Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:39:28.265437 containerd[1614]: time="2025-01-29T11:39:28.265401447Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:39:28.265714 containerd[1614]: time="2025-01-29T11:39:28.265686665Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:39:28.347970 containerd[1614]: time="2025-01-29T11:39:28.347900053Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:39:28.350542 containerd[1614]: time="2025-01-29T11:39:28.348987888Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:39:28.350542 containerd[1614]: time="2025-01-29T11:39:28.349043918Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:39:28.350542 containerd[1614]: time="2025-01-29T11:39:28.349071817Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:39:28.350542 containerd[1614]: time="2025-01-29T11:39:28.349095861Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:39:28.350542 containerd[1614]: time="2025-01-29T11:39:28.349115161Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 29 11:39:28.350542 containerd[1614]: time="2025-01-29T11:39:28.349138386Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:39:28.350542 containerd[1614]: time="2025-01-29T11:39:28.349166225Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:39:28.350542 containerd[1614]: time="2025-01-29T11:39:28.349203834Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:39:28.350542 containerd[1614]: time="2025-01-29T11:39:28.349221293Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:39:28.350542 containerd[1614]: time="2025-01-29T11:39:28.349240878Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:39:28.350542 containerd[1614]: time="2025-01-29T11:39:28.349259328Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:39:28.350542 containerd[1614]: time="2025-01-29T11:39:28.349287166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:39:28.350542 containerd[1614]: time="2025-01-29T11:39:28.349306193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:39:28.350542 containerd[1614]: time="2025-01-29T11:39:28.349342548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:39:28.350927 containerd[1614]: time="2025-01-29T11:39:28.349360504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:39:28.350927 containerd[1614]: time="2025-01-29T11:39:28.349377275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:39:28.350927 containerd[1614]: time="2025-01-29T11:39:28.349397830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:39:28.350927 containerd[1614]: time="2025-01-29T11:39:28.349412962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:39:28.350927 containerd[1614]: time="2025-01-29T11:39:28.349429643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:39:28.350927 containerd[1614]: time="2025-01-29T11:39:28.349446172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:39:28.350927 containerd[1614]: time="2025-01-29T11:39:28.349465725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:39:28.350927 containerd[1614]: time="2025-01-29T11:39:28.349478086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:39:28.350927 containerd[1614]: time="2025-01-29T11:39:28.349492876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:39:28.350927 containerd[1614]: time="2025-01-29T11:39:28.349509172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 29 11:39:28.350927 containerd[1614]: time="2025-01-29T11:39:28.349561003Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:39:28.350927 containerd[1614]: time="2025-01-29T11:39:28.349596114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:39:28.350927 containerd[1614]: time="2025-01-29T11:39:28.349617478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:39:28.350927 containerd[1614]: time="2025-01-29T11:39:28.349632540Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:39:28.351492 containerd[1614]: time="2025-01-29T11:39:28.349696946Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:39:28.351492 containerd[1614]: time="2025-01-29T11:39:28.349723125Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:39:28.351492 containerd[1614]: time="2025-01-29T11:39:28.349738319Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:39:28.351492 containerd[1614]: time="2025-01-29T11:39:28.349753603Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:39:28.351492 containerd[1614]: time="2025-01-29T11:39:28.349765418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:39:28.351492 containerd[1614]: time="2025-01-29T11:39:28.349781067Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:39:28.351492 containerd[1614]: time="2025-01-29T11:39:28.349796837Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:39:28.351492 containerd[1614]: time="2025-01-29T11:39:28.349810078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:39:28.351734 containerd[1614]: time="2025-01-29T11:39:28.350131924Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:39:28.351734 containerd[1614]: time="2025-01-29T11:39:28.350180519Z" level=info msg="Connect containerd service" Jan 29 11:39:28.351734 containerd[1614]: time="2025-01-29T11:39:28.350270426Z" level=info msg="using legacy CRI server" Jan 29 11:39:28.351734 containerd[1614]: time="2025-01-29T11:39:28.350283172Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:39:28.351734 containerd[1614]: time="2025-01-29T11:39:28.350463036Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:39:28.351734 containerd[1614]: time="2025-01-29T11:39:28.351185234Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 
11:39:28.351734 containerd[1614]: time="2025-01-29T11:39:28.351681422Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:39:28.351734 containerd[1614]: time="2025-01-29T11:39:28.351735176Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:39:28.355052 containerd[1614]: time="2025-01-29T11:39:28.354975747Z" level=info msg="Start subscribing containerd event" Jan 29 11:39:28.355120 containerd[1614]: time="2025-01-29T11:39:28.355075456Z" level=info msg="Start recovering state" Jan 29 11:39:28.355180 containerd[1614]: time="2025-01-29T11:39:28.355161114Z" level=info msg="Start event monitor" Jan 29 11:39:28.355215 containerd[1614]: time="2025-01-29T11:39:28.355186119Z" level=info msg="Start snapshots syncer" Jan 29 11:39:28.355215 containerd[1614]: time="2025-01-29T11:39:28.355198066Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:39:28.355215 containerd[1614]: time="2025-01-29T11:39:28.355207201Z" level=info msg="Start streaming server" Jan 29 11:39:28.356282 containerd[1614]: time="2025-01-29T11:39:28.355310429Z" level=info msg="containerd successfully booted in 0.266378s" Jan 29 11:39:28.355456 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:39:28.465873 tar[1610]: linux-amd64/LICENSE Jan 29 11:39:28.465999 tar[1610]: linux-amd64/README.md Jan 29 11:39:28.484636 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:39:29.161795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:39:29.163602 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:39:29.164957 systemd[1]: Startup finished in 8.277s (kernel) + 5.236s (userspace) = 13.514s. Jan 29 11:39:29.179460 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:39:29.843539 kubelet[1700]: E0129 11:39:29.843471 1700 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:39:29.848195 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:39:29.848517 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:39:33.132293 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:39:33.142859 systemd[1]: Started sshd@0-10.0.0.147:22-10.0.0.1:50884.service - OpenSSH per-connection server daemon (10.0.0.1:50884). Jan 29 11:39:33.190883 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 50884 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:39:33.193149 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:39:33.202402 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:39:33.210771 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:39:33.212630 systemd-logind[1593]: New session 1 of user core. Jan 29 11:39:33.225868 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:39:33.234047 systemd[1]: Starting user@500.service - User Manager for UID 500... 
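The first kubelet start fails because /var/lib/kubelet/config.yaml does not exist yet; in a kubeadm-style bootstrap that file only appears once kubeadm writes it, so the unit is expected to crash-loop until then. A small diagnostic sketch of the same check, with the path taken from the error above:

# Reproduce the check behind the kubelet failure above: the unit exits
# because /var/lib/kubelet/config.yaml has not been written yet. Path taken
# from the log; this is only a diagnostic sketch, not kubelet code.
import os

CONFIG_PATH = "/var/lib/kubelet/config.yaml"

def check_kubelet_config(path: str = CONFIG_PATH) -> None:
    if not os.path.exists(path):
        print(f"kubelet would fail: open {path}: no such file or directory")
    elif os.path.getsize(path) == 0:
        print(f"{path} exists but is empty")
    else:
        print(f"{path} present ({os.path.getsize(path)} bytes)")

if __name__ == "__main__":
    check_kubelet_config()

The later restarts at 11:39:40 and 11:39:50 fail the same way, and the service only stays up once it is restarted with a full configuration around 11:39:58.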
Jan 29 11:39:33.237177 (systemd)[1720]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:39:33.349147 systemd[1720]: Queued start job for default target default.target. Jan 29 11:39:33.349549 systemd[1720]: Created slice app.slice - User Application Slice. Jan 29 11:39:33.349571 systemd[1720]: Reached target paths.target - Paths. Jan 29 11:39:33.349584 systemd[1720]: Reached target timers.target - Timers. Jan 29 11:39:33.359603 systemd[1720]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:39:33.366633 systemd[1720]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:39:33.366714 systemd[1720]: Reached target sockets.target - Sockets. Jan 29 11:39:33.366736 systemd[1720]: Reached target basic.target - Basic System. Jan 29 11:39:33.366783 systemd[1720]: Reached target default.target - Main User Target. Jan 29 11:39:33.366824 systemd[1720]: Startup finished in 122ms. Jan 29 11:39:33.367916 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:39:33.370839 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:39:33.427721 systemd[1]: Started sshd@1-10.0.0.147:22-10.0.0.1:50898.service - OpenSSH per-connection server daemon (10.0.0.1:50898). Jan 29 11:39:33.461936 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 50898 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:39:33.463623 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:39:33.467710 systemd-logind[1593]: New session 2 of user core. Jan 29 11:39:33.477784 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:39:33.532736 sshd[1735]: Connection closed by 10.0.0.1 port 50898 Jan 29 11:39:33.533081 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Jan 29 11:39:33.547807 systemd[1]: Started sshd@2-10.0.0.147:22-10.0.0.1:50902.service - OpenSSH per-connection server daemon (10.0.0.1:50902). Jan 29 11:39:33.548450 systemd[1]: sshd@1-10.0.0.147:22-10.0.0.1:50898.service: Deactivated successfully. Jan 29 11:39:33.550488 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:39:33.551335 systemd-logind[1593]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:39:33.552729 systemd-logind[1593]: Removed session 2. Jan 29 11:39:33.581362 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 50902 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:39:33.582911 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:39:33.587139 systemd-logind[1593]: New session 3 of user core. Jan 29 11:39:33.597137 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:39:33.647432 sshd[1743]: Connection closed by 10.0.0.1 port 50902 Jan 29 11:39:33.647935 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Jan 29 11:39:33.660768 systemd[1]: Started sshd@3-10.0.0.147:22-10.0.0.1:50918.service - OpenSSH per-connection server daemon (10.0.0.1:50918). Jan 29 11:39:33.661254 systemd[1]: sshd@2-10.0.0.147:22-10.0.0.1:50902.service: Deactivated successfully. Jan 29 11:39:33.663738 systemd-logind[1593]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:39:33.664882 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:39:33.665538 systemd-logind[1593]: Removed session 3. 
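Sessions 1 through 3 above are opened and torn down within seconds of each other. When auditing this kind of churn it can help to tally the pam_unix open/close pairs; the sketch below assumes journal text phrased exactly like these entries.

# Count SSH session opens/closes in journal text shaped like the entries
# above ("pam_unix(sshd:session): session opened for user core...").
import re
import sys

OPENED = re.compile(r"pam_unix\(sshd:session\): session opened for user (\w+)")
CLOSED = re.compile(r"pam_unix\(sshd:session\): session closed for user (\w+)")

def tally(lines) -> dict[str, int]:
    open_sessions: dict[str, int] = {}
    for line in lines:
        if m := OPENED.search(line):
            open_sessions[m.group(1)] = open_sessions.get(m.group(1), 0) + 1
        elif m := CLOSED.search(line):
            open_sessions[m.group(1)] = open_sessions.get(m.group(1), 0) - 1
    return open_sessions

if __name__ == "__main__":
    print(tally(sys.stdin))   # e.g. pipe journal output into this script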
Jan 29 11:39:33.698947 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 50918 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:39:33.700742 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:39:33.704796 systemd-logind[1593]: New session 4 of user core. Jan 29 11:39:33.714779 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:39:33.771739 sshd[1751]: Connection closed by 10.0.0.1 port 50918 Jan 29 11:39:33.772153 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Jan 29 11:39:33.782781 systemd[1]: Started sshd@4-10.0.0.147:22-10.0.0.1:50934.service - OpenSSH per-connection server daemon (10.0.0.1:50934). Jan 29 11:39:33.783270 systemd[1]: sshd@3-10.0.0.147:22-10.0.0.1:50918.service: Deactivated successfully. Jan 29 11:39:33.785919 systemd-logind[1593]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:39:33.787397 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:39:33.788670 systemd-logind[1593]: Removed session 4. Jan 29 11:39:33.815958 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 50934 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:39:33.817820 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:39:33.822708 systemd-logind[1593]: New session 5 of user core. Jan 29 11:39:33.833978 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:39:33.894101 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:39:33.894436 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:39:33.919957 sudo[1760]: pam_unix(sudo:session): session closed for user root Jan 29 11:39:33.921450 sshd[1759]: Connection closed by 10.0.0.1 port 50934 Jan 29 11:39:33.921999 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Jan 29 11:39:33.930749 systemd[1]: Started sshd@5-10.0.0.147:22-10.0.0.1:50948.service - OpenSSH per-connection server daemon (10.0.0.1:50948). Jan 29 11:39:33.931210 systemd[1]: sshd@4-10.0.0.147:22-10.0.0.1:50934.service: Deactivated successfully. Jan 29 11:39:33.933381 systemd-logind[1593]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:39:33.935032 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:39:33.936094 systemd-logind[1593]: Removed session 5. Jan 29 11:39:33.972277 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 50948 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:39:33.973820 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:39:33.978000 systemd-logind[1593]: New session 6 of user core. Jan 29 11:39:33.987797 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 29 11:39:34.043422 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:39:34.043778 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:39:34.048872 sudo[1770]: pam_unix(sudo:session): session closed for user root Jan 29 11:39:34.056205 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:39:34.056569 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:39:34.080108 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:39:34.114876 augenrules[1792]: No rules Jan 29 11:39:34.116867 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:39:34.117284 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:39:34.118856 sudo[1769]: pam_unix(sudo:session): session closed for user root Jan 29 11:39:34.120655 sshd[1768]: Connection closed by 10.0.0.1 port 50948 Jan 29 11:39:34.120998 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Jan 29 11:39:34.129894 systemd[1]: Started sshd@6-10.0.0.147:22-10.0.0.1:50950.service - OpenSSH per-connection server daemon (10.0.0.1:50950). Jan 29 11:39:34.130475 systemd[1]: sshd@5-10.0.0.147:22-10.0.0.1:50948.service: Deactivated successfully. Jan 29 11:39:34.133379 systemd-logind[1593]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:39:34.134432 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:39:34.135706 systemd-logind[1593]: Removed session 6. Jan 29 11:39:34.164874 sshd[1798]: Accepted publickey for core from 10.0.0.1 port 50950 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:39:34.166628 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:39:34.171432 systemd-logind[1593]: New session 7 of user core. Jan 29 11:39:34.187970 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:39:34.244839 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:39:34.245269 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:39:34.767745 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:39:34.768059 (dockerd)[1825]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:39:35.356162 dockerd[1825]: time="2025-01-29T11:39:35.356065903Z" level=info msg="Starting up" Jan 29 11:39:36.579514 dockerd[1825]: time="2025-01-29T11:39:36.579403508Z" level=info msg="Loading containers: start." Jan 29 11:39:36.818557 kernel: Initializing XFRM netlink socket Jan 29 11:39:36.916365 systemd-networkd[1246]: docker0: Link UP Jan 29 11:39:36.956280 dockerd[1825]: time="2025-01-29T11:39:36.956199254Z" level=info msg="Loading containers: done." Jan 29 11:39:36.991749 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1227978284-merged.mount: Deactivated successfully. 
Jan 29 11:39:36.992312 dockerd[1825]: time="2025-01-29T11:39:36.992264241Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:39:36.992445 dockerd[1825]: time="2025-01-29T11:39:36.992415172Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 29 11:39:36.992589 dockerd[1825]: time="2025-01-29T11:39:36.992573542Z" level=info msg="Daemon has completed initialization" Jan 29 11:39:37.039061 dockerd[1825]: time="2025-01-29T11:39:37.038992853Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:39:37.039227 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:39:38.084068 containerd[1614]: time="2025-01-29T11:39:38.083944058Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 11:39:39.260760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3381508562.mount: Deactivated successfully. Jan 29 11:39:40.098723 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:39:40.107691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:39:40.283001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:39:40.288455 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:39:40.411348 kubelet[2087]: E0129 11:39:40.411022 2087 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:39:40.418089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:39:40.418357 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
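Once dockerd reports "API listen on /run/docker.sock", the daemon can be queried over that Unix socket. The sketch below issues a bare HTTP request for the standard /version endpoint of the Docker Engine API; in practice `docker version` or an SDK does the same thing.

# Talk to the Docker daemon over the Unix socket mentioned in the log
# ("API listen on /run/docker.sock") with a bare HTTP/1.0 request.
import socket

def docker_version(sock_path: str = "/run/docker.sock") -> str:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    response = b"".join(chunks).decode()
    return response.split("\r\n\r\n", 1)[1]   # drop the HTTP headers, keep the JSON body

if __name__ == "__main__":
    print(docker_version())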
Jan 29 11:39:42.329039 containerd[1614]: time="2025-01-29T11:39:42.328976361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:42.329657 containerd[1614]: time="2025-01-29T11:39:42.329604673Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 29 11:39:42.330752 containerd[1614]: time="2025-01-29T11:39:42.330725377Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:42.334268 containerd[1614]: time="2025-01-29T11:39:42.334236492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:42.335253 containerd[1614]: time="2025-01-29T11:39:42.335204485Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 4.251218059s" Jan 29 11:39:42.335253 containerd[1614]: time="2025-01-29T11:39:42.335240495Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 11:39:42.361496 containerd[1614]: time="2025-01-29T11:39:42.361193011Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 11:39:44.156357 containerd[1614]: time="2025-01-29T11:39:44.156286639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:44.157391 containerd[1614]: time="2025-01-29T11:39:44.157351436Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 29 11:39:44.158868 containerd[1614]: time="2025-01-29T11:39:44.158836268Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:44.162104 containerd[1614]: time="2025-01-29T11:39:44.162071482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:44.164109 containerd[1614]: time="2025-01-29T11:39:44.164069321Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.802830209s" Jan 29 11:39:44.164189 containerd[1614]: time="2025-01-29T11:39:44.164113924Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 11:39:44.190627 
containerd[1614]: time="2025-01-29T11:39:44.190315354Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 11:39:45.588206 containerd[1614]: time="2025-01-29T11:39:45.588144005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:45.589123 containerd[1614]: time="2025-01-29T11:39:45.589079361Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 29 11:39:45.590447 containerd[1614]: time="2025-01-29T11:39:45.590414944Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:45.594614 containerd[1614]: time="2025-01-29T11:39:45.594580309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:45.595835 containerd[1614]: time="2025-01-29T11:39:45.595792838Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.405429882s" Jan 29 11:39:45.595835 containerd[1614]: time="2025-01-29T11:39:45.595828240Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 11:39:45.617441 containerd[1614]: time="2025-01-29T11:39:45.617391668Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 11:39:47.061021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1587837960.mount: Deactivated successfully. 
Jan 29 11:39:47.960540 containerd[1614]: time="2025-01-29T11:39:47.960434613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:47.962551 containerd[1614]: time="2025-01-29T11:39:47.962489642Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 29 11:39:47.966843 containerd[1614]: time="2025-01-29T11:39:47.966795266Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:47.969661 containerd[1614]: time="2025-01-29T11:39:47.969615313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:47.970132 containerd[1614]: time="2025-01-29T11:39:47.970090957Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.35266076s" Jan 29 11:39:47.970163 containerd[1614]: time="2025-01-29T11:39:47.970131414Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 11:39:47.997630 containerd[1614]: time="2025-01-29T11:39:47.997574464Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:39:48.516548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2268019645.mount: Deactivated successfully. 
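Each pull pairs a byte count ("bytes read=") with a wall-clock duration, which gives a rough effective pull rate; rough because the duration also covers unpacking the layers. Using the kube-scheduler and kube-proxy numbers from the log:

# Rough effective pull rates from the byte counts and durations in the log.
pulls = {
    # image: (bytes read per the "stop pulling" line, seconds per the "Pulled image" line)
    "kube-scheduler:v1.30.9": (17_783_064, 1.405429882),
    "kube-proxy:v1.30.9":     (29_058_337, 2.35266076),
}

for image, (nbytes, seconds) in pulls.items():
    rate = nbytes / seconds / 1e6    # MB/s, decimal megabytes
    print(f"{image}: {nbytes/1e6:.1f} MB in {seconds:.2f}s ≈ {rate:.1f} MB/s")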
Jan 29 11:39:49.762922 containerd[1614]: time="2025-01-29T11:39:49.762867217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:49.763647 containerd[1614]: time="2025-01-29T11:39:49.763584502Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 11:39:49.764816 containerd[1614]: time="2025-01-29T11:39:49.764787670Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:49.767967 containerd[1614]: time="2025-01-29T11:39:49.767930256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:49.769398 containerd[1614]: time="2025-01-29T11:39:49.769365395Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.771749824s" Jan 29 11:39:49.769447 containerd[1614]: time="2025-01-29T11:39:49.769402947Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 11:39:49.796763 containerd[1614]: time="2025-01-29T11:39:49.796688523Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 11:39:50.319510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1734533569.mount: Deactivated successfully. 
Jan 29 11:39:50.325643 containerd[1614]: time="2025-01-29T11:39:50.325595304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:50.326389 containerd[1614]: time="2025-01-29T11:39:50.326327732Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 29 11:39:50.327589 containerd[1614]: time="2025-01-29T11:39:50.327561626Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:50.331473 containerd[1614]: time="2025-01-29T11:39:50.331437774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:50.332160 containerd[1614]: time="2025-01-29T11:39:50.332132140Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 535.407248ms" Jan 29 11:39:50.332213 containerd[1614]: time="2025-01-29T11:39:50.332159035Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 11:39:50.362643 containerd[1614]: time="2025-01-29T11:39:50.362585716Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 11:39:50.668761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:39:50.683744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:39:50.849690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:39:50.854332 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:39:50.923171 kubelet[2218]: E0129 11:39:50.923012 2218 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:39:50.927796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:39:50.928066 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:39:51.363456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3259535188.mount: Deactivated successfully. 
Jan 29 11:39:55.013611 containerd[1614]: time="2025-01-29T11:39:55.013504314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:55.014595 containerd[1614]: time="2025-01-29T11:39:55.014550940Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 29 11:39:55.016110 containerd[1614]: time="2025-01-29T11:39:55.016075870Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:55.019225 containerd[1614]: time="2025-01-29T11:39:55.019175344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:39:55.020265 containerd[1614]: time="2025-01-29T11:39:55.020236772Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.657612115s" Jan 29 11:39:55.020315 containerd[1614]: time="2025-01-29T11:39:55.020265725Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 11:39:57.552400 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:39:57.566806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:39:57.585482 systemd[1]: Reloading requested from client PID 2357 ('systemctl') (unit session-7.scope)... Jan 29 11:39:57.585506 systemd[1]: Reloading... Jan 29 11:39:57.693593 zram_generator::config[2396]: No configuration found. Jan 29 11:39:58.004685 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:39:58.086408 systemd[1]: Reloading finished in 500 ms. Jan 29 11:39:58.137532 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:39:58.137643 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:39:58.138066 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:39:58.140315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:39:58.320733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:39:58.326608 (kubelet)[2456]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:39:58.368444 kubelet[2456]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:39:58.368444 kubelet[2456]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 29 11:39:58.368444 kubelet[2456]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:39:58.369548 kubelet[2456]: I0129 11:39:58.369479 2456 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:39:58.964292 kubelet[2456]: I0129 11:39:58.964248 2456 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:39:58.964292 kubelet[2456]: I0129 11:39:58.964277 2456 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:39:58.964496 kubelet[2456]: I0129 11:39:58.964478 2456 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:39:59.128848 kubelet[2456]: I0129 11:39:59.128785 2456 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:39:59.129491 kubelet[2456]: E0129 11:39:59.129470 2456 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:39:59.139615 kubelet[2456]: I0129 11:39:59.139574 2456 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:39:59.140025 kubelet[2456]: I0129 11:39:59.139976 2456 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:39:59.140176 kubelet[2456]: I0129 11:39:59.140009 2456 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:39:59.140637 kubelet[2456]: I0129 11:39:59.140614 2456 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 29 11:39:59.140637 kubelet[2456]: I0129 11:39:59.140629 2456 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:39:59.140785 kubelet[2456]: I0129 11:39:59.140768 2456 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:39:59.141366 kubelet[2456]: I0129 11:39:59.141344 2456 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:39:59.141366 kubelet[2456]: I0129 11:39:59.141361 2456 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:39:59.141428 kubelet[2456]: I0129 11:39:59.141388 2456 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:39:59.141428 kubelet[2456]: I0129 11:39:59.141410 2456 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:39:59.142011 kubelet[2456]: W0129 11:39:59.141919 2456 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:39:59.142011 kubelet[2456]: E0129 11:39:59.141975 2456 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:39:59.145902 kubelet[2456]: W0129 11:39:59.144512 2456 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:39:59.145902 kubelet[2456]: E0129 11:39:59.144594 2456 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:39:59.145902 kubelet[2456]: I0129 11:39:59.145572 2456 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:39:59.147137 kubelet[2456]: I0129 11:39:59.147105 2456 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:39:59.147230 kubelet[2456]: W0129 11:39:59.147216 2456 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
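Every reflector call above fails with "dial tcp 10.0.0.147:6443: connect: connection refused": nothing is listening on the API server port yet, which is expected while the kubelet is still admitting the static control-plane pods from /etc/kubernetes/manifests. A minimal probe of the same endpoint, with the address and port taken from the log:

# Probe the API server endpoint the kubelet is failing to reach
# ("dial tcp 10.0.0.147:6443: connect: connection refused"). Connection
# refused simply means nothing is listening there yet.
import socket

def probe(host: str = "10.0.0.147", port: int = 6443, timeout: float = 2.0) -> str:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "listening"
    except ConnectionRefusedError:
        return "connection refused (no listener yet)"
    except OSError as exc:
        return f"unreachable: {exc}"

if __name__ == "__main__":
    print("10.0.0.147:6443 ->", probe())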
Jan 29 11:39:59.148564 kubelet[2456]: I0129 11:39:59.148533 2456 server.go:1264] "Started kubelet" Jan 29 11:39:59.148690 kubelet[2456]: I0129 11:39:59.148624 2456 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:39:59.149142 kubelet[2456]: I0129 11:39:59.149124 2456 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:39:59.149305 kubelet[2456]: I0129 11:39:59.149282 2456 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:39:59.152968 kubelet[2456]: I0129 11:39:59.152115 2456 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:39:59.152968 kubelet[2456]: I0129 11:39:59.152493 2456 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:39:59.154857 kubelet[2456]: E0129 11:39:59.154722 2456 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:39:59.154857 kubelet[2456]: I0129 11:39:59.154766 2456 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:39:59.154957 kubelet[2456]: I0129 11:39:59.154872 2456 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:39:59.154957 kubelet[2456]: I0129 11:39:59.154939 2456 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:39:59.155376 kubelet[2456]: W0129 11:39:59.155323 2456 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:39:59.155448 kubelet[2456]: E0129 11:39:59.155381 2456 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:39:59.156204 kubelet[2456]: E0129 11:39:59.156063 2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="200ms" Jan 29 11:39:59.157122 kubelet[2456]: E0129 11:39:59.156541 2456 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:39:59.157122 kubelet[2456]: I0129 11:39:59.156597 2456 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:39:59.157122 kubelet[2456]: I0129 11:39:59.156674 2456 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:39:59.160042 kubelet[2456]: E0129 11:39:59.158640 2456 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.147:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.147:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f26f1554e98c6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:39:59.148488902 +0000 UTC m=+0.817262366,LastTimestamp:2025-01-29 11:39:59.148488902 +0000 UTC m=+0.817262366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:39:59.160536 kubelet[2456]: I0129 11:39:59.160507 2456 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:39:59.172718 kubelet[2456]: I0129 11:39:59.172644 2456 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:39:59.174254 kubelet[2456]: I0129 11:39:59.174225 2456 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:39:59.174320 kubelet[2456]: I0129 11:39:59.174261 2456 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:39:59.174320 kubelet[2456]: I0129 11:39:59.174287 2456 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:39:59.174377 kubelet[2456]: E0129 11:39:59.174345 2456 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:39:59.178701 kubelet[2456]: W0129 11:39:59.178661 2456 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:39:59.178782 kubelet[2456]: E0129 11:39:59.178709 2456 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:39:59.188006 kubelet[2456]: I0129 11:39:59.187973 2456 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:39:59.188006 kubelet[2456]: I0129 11:39:59.187997 2456 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:39:59.188006 kubelet[2456]: I0129 11:39:59.188015 2456 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:39:59.191200 kubelet[2456]: I0129 11:39:59.191179 2456 policy_none.go:49] "None policy: Start" Jan 29 11:39:59.191742 kubelet[2456]: I0129 11:39:59.191715 2456 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:39:59.191742 kubelet[2456]: 
I0129 11:39:59.191739 2456 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:39:59.199697 kubelet[2456]: I0129 11:39:59.199670 2456 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:39:59.200509 kubelet[2456]: I0129 11:39:59.199915 2456 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:39:59.200509 kubelet[2456]: I0129 11:39:59.200050 2456 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:39:59.201674 kubelet[2456]: E0129 11:39:59.201657 2456 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:39:59.258169 kubelet[2456]: I0129 11:39:59.257446 2456 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:39:59.258169 kubelet[2456]: E0129 11:39:59.257938 2456 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Jan 29 11:39:59.275194 kubelet[2456]: I0129 11:39:59.275132 2456 topology_manager.go:215] "Topology Admit Handler" podUID="2cc6d9c9c3fe32b4a8f5da27b0265b7c" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 11:39:59.276612 kubelet[2456]: I0129 11:39:59.276563 2456 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 11:39:59.277542 kubelet[2456]: I0129 11:39:59.277510 2456 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 11:39:59.357203 kubelet[2456]: E0129 11:39:59.357127 2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="400ms" Jan 29 11:39:59.456640 kubelet[2456]: I0129 11:39:59.456514 2456 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:39:59.456640 kubelet[2456]: I0129 11:39:59.456592 2456 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:39:59.456640 kubelet[2456]: I0129 11:39:59.456624 2456 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2cc6d9c9c3fe32b4a8f5da27b0265b7c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2cc6d9c9c3fe32b4a8f5da27b0265b7c\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:39:59.456640 kubelet[2456]: I0129 11:39:59.456649 2456 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2cc6d9c9c3fe32b4a8f5da27b0265b7c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2cc6d9c9c3fe32b4a8f5da27b0265b7c\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:39:59.456640 kubelet[2456]: I0129 11:39:59.456667 2456 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:39:59.457263 kubelet[2456]: I0129 11:39:59.456711 2456 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:39:59.457263 kubelet[2456]: I0129 11:39:59.456739 2456 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:39:59.457263 kubelet[2456]: I0129 11:39:59.456754 2456 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2cc6d9c9c3fe32b4a8f5da27b0265b7c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2cc6d9c9c3fe32b4a8f5da27b0265b7c\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:39:59.457263 kubelet[2456]: I0129 11:39:59.456771 2456 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:39:59.459778 kubelet[2456]: I0129 11:39:59.459748 2456 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:39:59.460112 kubelet[2456]: E0129 11:39:59.460073 2456 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Jan 29 11:39:59.583078 kubelet[2456]: E0129 11:39:59.582927 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:39:59.583833 containerd[1614]: time="2025-01-29T11:39:59.583729123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2cc6d9c9c3fe32b4a8f5da27b0265b7c,Namespace:kube-system,Attempt:0,}" Jan 29 11:39:59.584930 kubelet[2456]: E0129 11:39:59.584908 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:39:59.585219 containerd[1614]: time="2025-01-29T11:39:59.585192481Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 29 11:39:59.586399 kubelet[2456]: E0129 11:39:59.586382 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:39:59.586783 containerd[1614]: time="2025-01-29T11:39:59.586747416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 29 11:39:59.757941 kubelet[2456]: E0129 11:39:59.757877 2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="800ms" Jan 29 11:39:59.862009 kubelet[2456]: I0129 11:39:59.861868 2456 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:39:59.862417 kubelet[2456]: E0129 11:39:59.862364 2456 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Jan 29 11:39:59.970672 kubelet[2456]: W0129 11:39:59.970578 2456 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:39:59.970672 kubelet[2456]: E0129 11:39:59.970672 2456 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:40:00.147476 kubelet[2456]: W0129 11:40:00.147311 2456 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:40:00.147476 kubelet[2456]: E0129 11:40:00.147388 2456 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:40:00.392163 kubelet[2456]: W0129 11:40:00.392059 2456 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:40:00.392163 kubelet[2456]: E0129 11:40:00.392156 2456 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:40:00.407402 kubelet[2456]: W0129 11:40:00.407282 2456 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:40:00.407402 
kubelet[2456]: E0129 11:40:00.407347 2456 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:40:00.558419 kubelet[2456]: E0129 11:40:00.558358 2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="1.6s" Jan 29 11:40:00.664504 kubelet[2456]: I0129 11:40:00.664380 2456 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:40:00.664789 kubelet[2456]: E0129 11:40:00.664767 2456 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Jan 29 11:40:01.310263 kubelet[2456]: E0129 11:40:01.310212 2456 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:40:02.052301 kubelet[2456]: W0129 11:40:02.052255 2456 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:40:02.052301 kubelet[2456]: E0129 11:40:02.052301 2456 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:40:02.131156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819173897.mount: Deactivated successfully. 
Jan 29 11:40:02.159584 kubelet[2456]: E0129 11:40:02.159499 2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="3.2s" Jan 29 11:40:02.199005 containerd[1614]: time="2025-01-29T11:40:02.198954914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:40:02.206242 containerd[1614]: time="2025-01-29T11:40:02.206166867Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:40:02.226461 containerd[1614]: time="2025-01-29T11:40:02.226377651Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:40:02.238379 containerd[1614]: time="2025-01-29T11:40:02.238293872Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:40:02.252749 containerd[1614]: time="2025-01-29T11:40:02.252710411Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:40:02.261734 containerd[1614]: time="2025-01-29T11:40:02.261656332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:40:02.262632 containerd[1614]: time="2025-01-29T11:40:02.262577199Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.67732596s" Jan 29 11:40:02.263472 containerd[1614]: time="2025-01-29T11:40:02.263425081Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:40:02.266711 kubelet[2456]: I0129 11:40:02.266664 2456 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:40:02.267137 kubelet[2456]: E0129 11:40:02.267092 2456 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Jan 29 11:40:02.268426 containerd[1614]: time="2025-01-29T11:40:02.268368521Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:40:02.280170 containerd[1614]: time="2025-01-29T11:40:02.280114860Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.693270016s" Jan 29 11:40:02.281048 
containerd[1614]: time="2025-01-29T11:40:02.281013483Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.69716085s" Jan 29 11:40:02.589628 containerd[1614]: time="2025-01-29T11:40:02.589474963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:40:02.589628 containerd[1614]: time="2025-01-29T11:40:02.589554573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:40:02.589628 containerd[1614]: time="2025-01-29T11:40:02.589569964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:40:02.592112 containerd[1614]: time="2025-01-29T11:40:02.591828105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:40:02.592112 containerd[1614]: time="2025-01-29T11:40:02.591895730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:40:02.592112 containerd[1614]: time="2025-01-29T11:40:02.591911151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:40:02.592112 containerd[1614]: time="2025-01-29T11:40:02.592024388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:40:02.593140 containerd[1614]: time="2025-01-29T11:40:02.592569182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:40:02.596092 containerd[1614]: time="2025-01-29T11:40:02.595464742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:40:02.596092 containerd[1614]: time="2025-01-29T11:40:02.595550825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:40:02.596092 containerd[1614]: time="2025-01-29T11:40:02.595567368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:40:02.596092 containerd[1614]: time="2025-01-29T11:40:02.595665234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:40:02.603121 kubelet[2456]: W0129 11:40:02.603085 2456 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:40:02.603276 kubelet[2456]: E0129 11:40:02.603263 2456 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 29 11:40:02.686839 containerd[1614]: time="2025-01-29T11:40:02.686787911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b006e2c64cf23aa319a031637da582b938bc932aa2f695ed519093f93e9917e1\"" Jan 29 11:40:02.688351 kubelet[2456]: E0129 11:40:02.688324 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:02.691865 containerd[1614]: time="2025-01-29T11:40:02.691668515Z" level=info msg="CreateContainer within sandbox \"b006e2c64cf23aa319a031637da582b938bc932aa2f695ed519093f93e9917e1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:40:02.702845 containerd[1614]: time="2025-01-29T11:40:02.702780472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb4c23ca7d899ab475d3f919df1c24d24069dc524dd83569a0c073624afed12d\"" Jan 29 11:40:02.703591 kubelet[2456]: E0129 11:40:02.703553 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:02.705020 containerd[1614]: time="2025-01-29T11:40:02.704945165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2cc6d9c9c3fe32b4a8f5da27b0265b7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"168305c8e1c3a7ff9efaa35b113df0c872a6d857cbaff7a470f785af9c1ede56\"" Jan 29 11:40:02.705786 kubelet[2456]: E0129 11:40:02.705766 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:02.706115 containerd[1614]: time="2025-01-29T11:40:02.706088750Z" level=info msg="CreateContainer within sandbox \"fb4c23ca7d899ab475d3f919df1c24d24069dc524dd83569a0c073624afed12d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:40:02.708220 containerd[1614]: time="2025-01-29T11:40:02.708190237Z" level=info msg="CreateContainer within sandbox \"168305c8e1c3a7ff9efaa35b113df0c872a6d857cbaff7a470f785af9c1ede56\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:40:02.784074 containerd[1614]: time="2025-01-29T11:40:02.784014235Z" level=info msg="CreateContainer within sandbox \"b006e2c64cf23aa319a031637da582b938bc932aa2f695ed519093f93e9917e1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6ccef691326b4d1f3a4e2cf1016a2ecac68c2b1ad09bff3a95449612a8d32735\"" Jan 29 11:40:02.784824 containerd[1614]: time="2025-01-29T11:40:02.784797737Z" 
level=info msg="StartContainer for \"6ccef691326b4d1f3a4e2cf1016a2ecac68c2b1ad09bff3a95449612a8d32735\"" Jan 29 11:40:02.791825 containerd[1614]: time="2025-01-29T11:40:02.791768616Z" level=info msg="CreateContainer within sandbox \"fb4c23ca7d899ab475d3f919df1c24d24069dc524dd83569a0c073624afed12d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"908c1d4468ff026187a029d0612db9917fd77c731b6d9f700cf4861a3b439db0\"" Jan 29 11:40:02.793337 containerd[1614]: time="2025-01-29T11:40:02.792629563Z" level=info msg="StartContainer for \"908c1d4468ff026187a029d0612db9917fd77c731b6d9f700cf4861a3b439db0\"" Jan 29 11:40:02.795828 containerd[1614]: time="2025-01-29T11:40:02.795791227Z" level=info msg="CreateContainer within sandbox \"168305c8e1c3a7ff9efaa35b113df0c872a6d857cbaff7a470f785af9c1ede56\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"47f7ba945c5de4f5c147ccbf6355dcba720e6736c8c400c6f6ab9e1b1fccf6b8\"" Jan 29 11:40:02.796469 containerd[1614]: time="2025-01-29T11:40:02.796416171Z" level=info msg="StartContainer for \"47f7ba945c5de4f5c147ccbf6355dcba720e6736c8c400c6f6ab9e1b1fccf6b8\"" Jan 29 11:40:02.909559 containerd[1614]: time="2025-01-29T11:40:02.909239673Z" level=info msg="StartContainer for \"6ccef691326b4d1f3a4e2cf1016a2ecac68c2b1ad09bff3a95449612a8d32735\" returns successfully" Jan 29 11:40:02.910402 containerd[1614]: time="2025-01-29T11:40:02.910386524Z" level=info msg="StartContainer for \"908c1d4468ff026187a029d0612db9917fd77c731b6d9f700cf4861a3b439db0\" returns successfully" Jan 29 11:40:02.920272 containerd[1614]: time="2025-01-29T11:40:02.920147019Z" level=info msg="StartContainer for \"47f7ba945c5de4f5c147ccbf6355dcba720e6736c8c400c6f6ab9e1b1fccf6b8\" returns successfully" Jan 29 11:40:03.197411 kubelet[2456]: E0129 11:40:03.197050 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:03.204611 kubelet[2456]: E0129 11:40:03.203119 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:03.204611 kubelet[2456]: E0129 11:40:03.203715 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:04.209950 kubelet[2456]: E0129 11:40:04.209844 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:04.582629 kubelet[2456]: E0129 11:40:04.582409 2456 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f26f1554e98c6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:39:59.148488902 +0000 UTC m=+0.817262366,LastTimestamp:2025-01-29 11:39:59.148488902 +0000 UTC m=+0.817262366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:40:04.837995 kubelet[2456]: E0129 
11:40:04.837622 2456 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f26f155c8f404 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:39:59.156507652 +0000 UTC m=+0.825281116,LastTimestamp:2025-01-29 11:39:59.156507652 +0000 UTC m=+0.825281116,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:40:04.850789 kubelet[2456]: E0129 11:40:04.850655 2456 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 29 11:40:05.204429 kubelet[2456]: E0129 11:40:05.204303 2456 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 29 11:40:05.364207 kubelet[2456]: E0129 11:40:05.364159 2456 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:40:05.469110 kubelet[2456]: I0129 11:40:05.468944 2456 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:40:05.473113 kubelet[2456]: I0129 11:40:05.473086 2456 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 11:40:05.480869 kubelet[2456]: E0129 11:40:05.480827 2456 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:40:05.581551 kubelet[2456]: E0129 11:40:05.581394 2456 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:40:05.682323 kubelet[2456]: E0129 11:40:05.682265 2456 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:40:05.783201 kubelet[2456]: E0129 11:40:05.783043 2456 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:40:05.883572 kubelet[2456]: E0129 11:40:05.883505 2456 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:40:05.984051 kubelet[2456]: E0129 11:40:05.984015 2456 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:40:06.084625 kubelet[2456]: E0129 11:40:06.084512 2456 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:40:06.185580 kubelet[2456]: E0129 11:40:06.185515 2456 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:40:06.236408 kubelet[2456]: E0129 11:40:06.236378 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:06.286057 kubelet[2456]: E0129 11:40:06.286026 2456 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:40:06.386684 kubelet[2456]: E0129 
11:40:06.386563 2456 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:40:06.487122 kubelet[2456]: E0129 11:40:06.487077 2456 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:40:06.587623 kubelet[2456]: E0129 11:40:06.587566 2456 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:40:06.688489 kubelet[2456]: E0129 11:40:06.688365 2456 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:40:06.757001 systemd[1]: Reloading requested from client PID 2737 ('systemctl') (unit session-7.scope)... Jan 29 11:40:06.757019 systemd[1]: Reloading... Jan 29 11:40:06.788997 kubelet[2456]: E0129 11:40:06.788949 2456 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:40:06.831559 zram_generator::config[2776]: No configuration found. Jan 29 11:40:06.890163 kubelet[2456]: E0129 11:40:06.890113 2456 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:40:06.954539 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:40:06.990696 kubelet[2456]: E0129 11:40:06.990632 2456 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:40:07.033185 systemd[1]: Reloading finished in 275 ms. Jan 29 11:40:07.069471 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:40:07.069722 kubelet[2456]: I0129 11:40:07.069471 2456 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:40:07.083807 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:40:07.084230 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:40:07.091729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:40:07.232425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:40:07.239143 (kubelet)[2831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:40:07.285564 kubelet[2831]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:40:07.285564 kubelet[2831]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:40:07.285564 kubelet[2831]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:40:07.286000 kubelet[2831]: I0129 11:40:07.285618 2831 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:40:07.290698 kubelet[2831]: I0129 11:40:07.290665 2831 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:40:07.290698 kubelet[2831]: I0129 11:40:07.290690 2831 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:40:07.290893 kubelet[2831]: I0129 11:40:07.290878 2831 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:40:07.292044 kubelet[2831]: I0129 11:40:07.292023 2831 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:40:07.293125 kubelet[2831]: I0129 11:40:07.293074 2831 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:40:07.301092 kubelet[2831]: I0129 11:40:07.301048 2831 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:40:07.301771 kubelet[2831]: I0129 11:40:07.301724 2831 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:40:07.301962 kubelet[2831]: I0129 11:40:07.301765 2831 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:40:07.302038 kubelet[2831]: I0129 11:40:07.301986 2831 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:40:07.302038 kubelet[2831]: I0129 11:40:07.302000 2831 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:40:07.302104 kubelet[2831]: I0129 11:40:07.302052 2831 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:40:07.302203 kubelet[2831]: I0129 11:40:07.302182 2831 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:40:07.302228 kubelet[2831]: I0129 11:40:07.302203 2831 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Jan 29 11:40:07.302257 kubelet[2831]: I0129 11:40:07.302243 2831 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:40:07.302281 kubelet[2831]: I0129 11:40:07.302272 2831 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:40:07.303325 kubelet[2831]: I0129 11:40:07.303300 2831 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:40:07.303559 kubelet[2831]: I0129 11:40:07.303538 2831 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:40:07.307612 kubelet[2831]: I0129 11:40:07.307586 2831 server.go:1264] "Started kubelet" Jan 29 11:40:07.309089 kubelet[2831]: I0129 11:40:07.307707 2831 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:40:07.309089 kubelet[2831]: I0129 11:40:07.307994 2831 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:40:07.309089 kubelet[2831]: I0129 11:40:07.308300 2831 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:40:07.309089 kubelet[2831]: I0129 11:40:07.308737 2831 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:40:07.311214 kubelet[2831]: I0129 11:40:07.311186 2831 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:40:07.311912 kubelet[2831]: I0129 11:40:07.311887 2831 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:40:07.311994 kubelet[2831]: I0129 11:40:07.311977 2831 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:40:07.312110 kubelet[2831]: I0129 11:40:07.312092 2831 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:40:07.315104 kubelet[2831]: I0129 11:40:07.315033 2831 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:40:07.315422 kubelet[2831]: I0129 11:40:07.315398 2831 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:40:07.317279 kubelet[2831]: E0129 11:40:07.317255 2831 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:40:07.320542 kubelet[2831]: I0129 11:40:07.317636 2831 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:40:07.322700 kubelet[2831]: I0129 11:40:07.322635 2831 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:40:07.323966 kubelet[2831]: I0129 11:40:07.323941 2831 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:40:07.324014 kubelet[2831]: I0129 11:40:07.323975 2831 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:40:07.324014 kubelet[2831]: I0129 11:40:07.323994 2831 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:40:07.324052 kubelet[2831]: E0129 11:40:07.324035 2831 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:40:07.364978 kubelet[2831]: I0129 11:40:07.364937 2831 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:40:07.364978 kubelet[2831]: I0129 11:40:07.364961 2831 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:40:07.364978 kubelet[2831]: I0129 11:40:07.364980 2831 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:40:07.365126 kubelet[2831]: I0129 11:40:07.365116 2831 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:40:07.365148 kubelet[2831]: I0129 11:40:07.365126 2831 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:40:07.365148 kubelet[2831]: I0129 11:40:07.365144 2831 policy_none.go:49] "None policy: Start" Jan 29 11:40:07.365775 kubelet[2831]: I0129 11:40:07.365745 2831 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:40:07.365775 kubelet[2831]: I0129 11:40:07.365774 2831 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:40:07.365929 kubelet[2831]: I0129 11:40:07.365915 2831 state_mem.go:75] "Updated machine memory state" Jan 29 11:40:07.367410 kubelet[2831]: I0129 11:40:07.367388 2831 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:40:07.367618 kubelet[2831]: I0129 11:40:07.367572 2831 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:40:07.367813 kubelet[2831]: I0129 11:40:07.367675 2831 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:40:07.416174 kubelet[2831]: I0129 11:40:07.416127 2831 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:40:07.424184 kubelet[2831]: I0129 11:40:07.424146 2831 topology_manager.go:215] "Topology Admit Handler" podUID="2cc6d9c9c3fe32b4a8f5da27b0265b7c" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 11:40:07.424265 kubelet[2831]: I0129 11:40:07.424244 2831 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 11:40:07.424315 kubelet[2831]: I0129 11:40:07.424302 2831 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 11:40:07.612995 kubelet[2831]: I0129 11:40:07.612849 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:40:07.612995 kubelet[2831]: I0129 11:40:07.612894 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:40:07.612995 kubelet[2831]: I0129 11:40:07.612921 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:40:07.612995 kubelet[2831]: I0129 11:40:07.612944 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2cc6d9c9c3fe32b4a8f5da27b0265b7c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2cc6d9c9c3fe32b4a8f5da27b0265b7c\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:40:07.612995 kubelet[2831]: I0129 11:40:07.612966 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:40:07.613191 kubelet[2831]: I0129 11:40:07.613023 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:40:07.613191 kubelet[2831]: I0129 11:40:07.613052 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:40:07.613191 kubelet[2831]: I0129 11:40:07.613072 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2cc6d9c9c3fe32b4a8f5da27b0265b7c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2cc6d9c9c3fe32b4a8f5da27b0265b7c\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:40:07.613191 kubelet[2831]: I0129 11:40:07.613093 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2cc6d9c9c3fe32b4a8f5da27b0265b7c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2cc6d9c9c3fe32b4a8f5da27b0265b7c\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:40:07.693371 kubelet[2831]: I0129 11:40:07.693310 2831 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 29 11:40:07.694111 kubelet[2831]: I0129 11:40:07.693434 2831 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 11:40:07.794349 sudo[2865]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 11:40:07.794752 sudo[2865]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 11:40:07.961456 kubelet[2831]: E0129 11:40:07.961297 2831 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:07.961456 kubelet[2831]: E0129 11:40:07.961304 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:07.994217 kubelet[2831]: E0129 11:40:07.994165 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:08.263594 sudo[2865]: pam_unix(sudo:session): session closed for user root Jan 29 11:40:08.303546 kubelet[2831]: I0129 11:40:08.303488 2831 apiserver.go:52] "Watching apiserver" Jan 29 11:40:08.314249 kubelet[2831]: I0129 11:40:08.313073 2831 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:40:08.336316 kubelet[2831]: E0129 11:40:08.335589 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:08.344316 kubelet[2831]: E0129 11:40:08.343771 2831 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:40:08.344316 kubelet[2831]: E0129 11:40:08.343820 2831 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:40:08.344316 kubelet[2831]: E0129 11:40:08.344242 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:08.344316 kubelet[2831]: E0129 11:40:08.344249 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:08.412495 kubelet[2831]: I0129 11:40:08.412434 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.412400116 podStartE2EDuration="1.412400116s" podCreationTimestamp="2025-01-29 11:40:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:40:08.412284708 +0000 UTC m=+1.168421015" watchObservedRunningTime="2025-01-29 11:40:08.412400116 +0000 UTC m=+1.168536423" Jan 29 11:40:08.412699 kubelet[2831]: I0129 11:40:08.412560 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.412556233 podStartE2EDuration="1.412556233s" podCreationTimestamp="2025-01-29 11:40:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:40:08.391214096 +0000 UTC m=+1.147350403" watchObservedRunningTime="2025-01-29 11:40:08.412556233 +0000 UTC m=+1.168692541" Jan 29 11:40:08.427533 kubelet[2831]: I0129 11:40:08.427457 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.427439078 podStartE2EDuration="1.427439078s" podCreationTimestamp="2025-01-29 11:40:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:40:08.420054413 +0000 UTC m=+1.176190720" watchObservedRunningTime="2025-01-29 11:40:08.427439078 +0000 UTC m=+1.183575386" Jan 29 11:40:09.336609 kubelet[2831]: E0129 11:40:09.336579 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:09.337117 kubelet[2831]: E0129 11:40:09.336692 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:09.559847 sudo[1805]: pam_unix(sudo:session): session closed for user root Jan 29 11:40:09.561354 sshd[1804]: Connection closed by 10.0.0.1 port 50950 Jan 29 11:40:09.561876 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Jan 29 11:40:09.566408 systemd[1]: sshd@6-10.0.0.147:22-10.0.0.1:50950.service: Deactivated successfully. Jan 29 11:40:09.569200 systemd-logind[1593]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:40:09.569328 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:40:09.570939 systemd-logind[1593]: Removed session 7. Jan 29 11:40:10.338055 kubelet[2831]: E0129 11:40:10.338022 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:10.692679 kubelet[2831]: E0129 11:40:10.692558 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:11.338673 kubelet[2831]: E0129 11:40:11.338629 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:12.652290 update_engine[1597]: I20250129 11:40:12.652160 1597 update_attempter.cc:509] Updating boot flags... 
Jan 29 11:40:12.686548 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2914) Jan 29 11:40:12.716554 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2913) Jan 29 11:40:12.744590 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2913) Jan 29 11:40:17.762401 kubelet[2831]: E0129 11:40:17.762347 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:18.349278 kubelet[2831]: E0129 11:40:18.349239 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:19.688697 kubelet[2831]: E0129 11:40:19.688659 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:20.351929 kubelet[2831]: E0129 11:40:20.351889 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:20.696483 kubelet[2831]: E0129 11:40:20.696356 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:21.814349 kubelet[2831]: I0129 11:40:21.814311 2831 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:40:21.814932 containerd[1614]: time="2025-01-29T11:40:21.814696292Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 11:40:21.815246 kubelet[2831]: I0129 11:40:21.814936 2831 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:40:22.622599 kubelet[2831]: I0129 11:40:22.622556 2831 topology_manager.go:215] "Topology Admit Handler" podUID="e32ed61e-9b13-482c-8ea7-66e3b6f8bd41" podNamespace="kube-system" podName="kube-proxy-mdrnq" Jan 29 11:40:22.626195 kubelet[2831]: I0129 11:40:22.626151 2831 topology_manager.go:215] "Topology Admit Handler" podUID="a7ddba80-d7de-4e64-8697-a9e6d39d9c98" podNamespace="kube-system" podName="cilium-n82lj" Jan 29 11:40:22.798450 kubelet[2831]: I0129 11:40:22.798398 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-lib-modules\") pod \"cilium-n82lj\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " pod="kube-system/cilium-n82lj" Jan 29 11:40:22.798450 kubelet[2831]: I0129 11:40:22.798437 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-clustermesh-secrets\") pod \"cilium-n82lj\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " pod="kube-system/cilium-n82lj" Jan 29 11:40:22.798681 kubelet[2831]: I0129 11:40:22.798460 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-host-proc-sys-kernel\") pod \"cilium-n82lj\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " pod="kube-system/cilium-n82lj" Jan 29 11:40:22.798681 kubelet[2831]: I0129 11:40:22.798545 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-hostproc\") pod \"cilium-n82lj\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " pod="kube-system/cilium-n82lj" Jan 29 11:40:22.798681 kubelet[2831]: I0129 11:40:22.798592 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-cilium-cgroup\") pod \"cilium-n82lj\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " pod="kube-system/cilium-n82lj" Jan 29 11:40:22.798681 kubelet[2831]: I0129 11:40:22.798613 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-cilium-config-path\") pod \"cilium-n82lj\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " pod="kube-system/cilium-n82lj" Jan 29 11:40:22.798681 kubelet[2831]: I0129 11:40:22.798631 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-hubble-tls\") pod \"cilium-n82lj\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " pod="kube-system/cilium-n82lj" Jan 29 11:40:22.798681 kubelet[2831]: I0129 11:40:22.798647 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-etc-cni-netd\") pod \"cilium-n82lj\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " 
pod="kube-system/cilium-n82lj" Jan 29 11:40:22.798870 kubelet[2831]: I0129 11:40:22.798664 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr27w\" (UniqueName: \"kubernetes.io/projected/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-kube-api-access-qr27w\") pod \"cilium-n82lj\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " pod="kube-system/cilium-n82lj" Jan 29 11:40:22.798870 kubelet[2831]: I0129 11:40:22.798678 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e32ed61e-9b13-482c-8ea7-66e3b6f8bd41-kube-proxy\") pod \"kube-proxy-mdrnq\" (UID: \"e32ed61e-9b13-482c-8ea7-66e3b6f8bd41\") " pod="kube-system/kube-proxy-mdrnq" Jan 29 11:40:22.798870 kubelet[2831]: I0129 11:40:22.798694 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-xtables-lock\") pod \"cilium-n82lj\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " pod="kube-system/cilium-n82lj" Jan 29 11:40:22.798870 kubelet[2831]: I0129 11:40:22.798710 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e32ed61e-9b13-482c-8ea7-66e3b6f8bd41-xtables-lock\") pod \"kube-proxy-mdrnq\" (UID: \"e32ed61e-9b13-482c-8ea7-66e3b6f8bd41\") " pod="kube-system/kube-proxy-mdrnq" Jan 29 11:40:22.798870 kubelet[2831]: I0129 11:40:22.798724 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e32ed61e-9b13-482c-8ea7-66e3b6f8bd41-lib-modules\") pod \"kube-proxy-mdrnq\" (UID: \"e32ed61e-9b13-482c-8ea7-66e3b6f8bd41\") " pod="kube-system/kube-proxy-mdrnq" Jan 29 11:40:22.798870 kubelet[2831]: I0129 11:40:22.798739 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-cni-path\") pod \"cilium-n82lj\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " pod="kube-system/cilium-n82lj" Jan 29 11:40:22.799032 kubelet[2831]: I0129 11:40:22.798762 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5mv2\" (UniqueName: \"kubernetes.io/projected/e32ed61e-9b13-482c-8ea7-66e3b6f8bd41-kube-api-access-d5mv2\") pod \"kube-proxy-mdrnq\" (UID: \"e32ed61e-9b13-482c-8ea7-66e3b6f8bd41\") " pod="kube-system/kube-proxy-mdrnq" Jan 29 11:40:22.799032 kubelet[2831]: I0129 11:40:22.798824 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-cilium-run\") pod \"cilium-n82lj\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " pod="kube-system/cilium-n82lj" Jan 29 11:40:22.799032 kubelet[2831]: I0129 11:40:22.798839 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-bpf-maps\") pod \"cilium-n82lj\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " pod="kube-system/cilium-n82lj" Jan 29 11:40:22.799032 kubelet[2831]: I0129 11:40:22.798854 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-host-proc-sys-net\") pod \"cilium-n82lj\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " pod="kube-system/cilium-n82lj" Jan 29 11:40:22.817020 kubelet[2831]: I0129 11:40:22.816962 2831 topology_manager.go:215] "Topology Admit Handler" podUID="61fdd063-1c5b-45a4-b1e5-d82f811399be" podNamespace="kube-system" podName="cilium-operator-599987898-w2mfz" Jan 29 11:40:22.900900 kubelet[2831]: I0129 11:40:22.899693 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7cjz\" (UniqueName: \"kubernetes.io/projected/61fdd063-1c5b-45a4-b1e5-d82f811399be-kube-api-access-h7cjz\") pod \"cilium-operator-599987898-w2mfz\" (UID: \"61fdd063-1c5b-45a4-b1e5-d82f811399be\") " pod="kube-system/cilium-operator-599987898-w2mfz" Jan 29 11:40:22.900900 kubelet[2831]: I0129 11:40:22.899800 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61fdd063-1c5b-45a4-b1e5-d82f811399be-cilium-config-path\") pod \"cilium-operator-599987898-w2mfz\" (UID: \"61fdd063-1c5b-45a4-b1e5-d82f811399be\") " pod="kube-system/cilium-operator-599987898-w2mfz" Jan 29 11:40:22.928262 kubelet[2831]: E0129 11:40:22.928220 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:22.928843 containerd[1614]: time="2025-01-29T11:40:22.928805583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mdrnq,Uid:e32ed61e-9b13-482c-8ea7-66e3b6f8bd41,Namespace:kube-system,Attempt:0,}" Jan 29 11:40:22.934575 kubelet[2831]: E0129 11:40:22.934547 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:22.935035 containerd[1614]: time="2025-01-29T11:40:22.934998264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n82lj,Uid:a7ddba80-d7de-4e64-8697-a9e6d39d9c98,Namespace:kube-system,Attempt:0,}" Jan 29 11:40:23.016677 containerd[1614]: time="2025-01-29T11:40:23.016492052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:40:23.016811 containerd[1614]: time="2025-01-29T11:40:23.016719468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:40:23.016811 containerd[1614]: time="2025-01-29T11:40:23.016758543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:40:23.018009 containerd[1614]: time="2025-01-29T11:40:23.017959111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:40:23.018817 containerd[1614]: time="2025-01-29T11:40:23.018683442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:40:23.018817 containerd[1614]: time="2025-01-29T11:40:23.018722788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:40:23.018817 containerd[1614]: time="2025-01-29T11:40:23.018733007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:40:23.018894 containerd[1614]: time="2025-01-29T11:40:23.018817018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:40:23.062282 containerd[1614]: time="2025-01-29T11:40:23.061981151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n82lj,Uid:a7ddba80-d7de-4e64-8697-a9e6d39d9c98,Namespace:kube-system,Attempt:0,} returns sandbox id \"73c56d584c46ee6f2806d494a593ec75516a40f8dd82dbf1b4700384b07d7d7e\"" Jan 29 11:40:23.063274 kubelet[2831]: E0129 11:40:23.063254 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:23.065621 containerd[1614]: time="2025-01-29T11:40:23.065587249Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 11:40:23.068165 containerd[1614]: time="2025-01-29T11:40:23.068142569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mdrnq,Uid:e32ed61e-9b13-482c-8ea7-66e3b6f8bd41,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8fa4d653aef7e5ac1724bac713508fb9d35e084a6723d0671f2be3195eed46a\"" Jan 29 11:40:23.069073 kubelet[2831]: E0129 11:40:23.069040 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:23.071060 containerd[1614]: time="2025-01-29T11:40:23.071017824Z" level=info msg="CreateContainer within sandbox \"b8fa4d653aef7e5ac1724bac713508fb9d35e084a6723d0671f2be3195eed46a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:40:23.089818 containerd[1614]: time="2025-01-29T11:40:23.089766486Z" level=info msg="CreateContainer within sandbox \"b8fa4d653aef7e5ac1724bac713508fb9d35e084a6723d0671f2be3195eed46a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"71eecb2238a6fe3e001bdb2dc7e131e5e8efeb788aa8cd69df2f6d6eb5e26570\"" Jan 29 11:40:23.090384 containerd[1614]: time="2025-01-29T11:40:23.090353073Z" level=info msg="StartContainer for \"71eecb2238a6fe3e001bdb2dc7e131e5e8efeb788aa8cd69df2f6d6eb5e26570\"" Jan 29 11:40:23.123199 kubelet[2831]: E0129 11:40:23.123136 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:23.124294 containerd[1614]: time="2025-01-29T11:40:23.124254409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-w2mfz,Uid:61fdd063-1c5b-45a4-b1e5-d82f811399be,Namespace:kube-system,Attempt:0,}" Jan 29 11:40:23.158628 containerd[1614]: time="2025-01-29T11:40:23.157661645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:40:23.158628 containerd[1614]: time="2025-01-29T11:40:23.157813476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:40:23.158628 containerd[1614]: time="2025-01-29T11:40:23.157826432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:40:23.158628 containerd[1614]: time="2025-01-29T11:40:23.157999795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:40:23.168439 containerd[1614]: time="2025-01-29T11:40:23.168399716Z" level=info msg="StartContainer for \"71eecb2238a6fe3e001bdb2dc7e131e5e8efeb788aa8cd69df2f6d6eb5e26570\" returns successfully" Jan 29 11:40:23.213470 containerd[1614]: time="2025-01-29T11:40:23.213398735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-w2mfz,Uid:61fdd063-1c5b-45a4-b1e5-d82f811399be,Namespace:kube-system,Attempt:0,} returns sandbox id \"06fd2015e0681e5807e9efdf93c4346dd774dabc6ff1b62d8f639c1d3ecf51ce\"" Jan 29 11:40:23.214240 kubelet[2831]: E0129 11:40:23.214210 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:23.361605 kubelet[2831]: E0129 11:40:23.361418 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:23.369110 kubelet[2831]: I0129 11:40:23.369053 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mdrnq" podStartSLOduration=1.36903365 podStartE2EDuration="1.36903365s" podCreationTimestamp="2025-01-29 11:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:40:23.368982663 +0000 UTC m=+16.125118970" watchObservedRunningTime="2025-01-29 11:40:23.36903365 +0000 UTC m=+16.125169967" Jan 29 11:40:27.814674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2418286793.mount: Deactivated successfully. Jan 29 11:40:29.625602 systemd-resolved[1461]: Under memory pressure, flushing caches. Jan 29 11:40:29.625649 systemd-resolved[1461]: Flushed all caches. Jan 29 11:40:29.627616 systemd-journald[1151]: Under memory pressure, flushing caches. 
Jan 29 11:40:32.654276 containerd[1614]: time="2025-01-29T11:40:32.654204959Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:40:32.673319 containerd[1614]: time="2025-01-29T11:40:32.673222840Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 11:40:32.694381 containerd[1614]: time="2025-01-29T11:40:32.694328104Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:40:32.696381 containerd[1614]: time="2025-01-29T11:40:32.696338501Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.630568671s" Jan 29 11:40:32.696479 containerd[1614]: time="2025-01-29T11:40:32.696388035Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 11:40:32.697653 containerd[1614]: time="2025-01-29T11:40:32.697321537Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 11:40:32.698545 containerd[1614]: time="2025-01-29T11:40:32.698500988Z" level=info msg="CreateContainer within sandbox \"73c56d584c46ee6f2806d494a593ec75516a40f8dd82dbf1b4700384b07d7d7e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:40:32.847465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3876418911.mount: Deactivated successfully. Jan 29 11:40:32.889923 containerd[1614]: time="2025-01-29T11:40:32.889858592Z" level=info msg="CreateContainer within sandbox \"73c56d584c46ee6f2806d494a593ec75516a40f8dd82dbf1b4700384b07d7d7e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6485121a8302bb4bf9e48a1003938c3e9f6bd43ab11f7b195349d1a47275be9c\"" Jan 29 11:40:32.890376 containerd[1614]: time="2025-01-29T11:40:32.890336643Z" level=info msg="StartContainer for \"6485121a8302bb4bf9e48a1003938c3e9f6bd43ab11f7b195349d1a47275be9c\"" Jan 29 11:40:32.950936 containerd[1614]: time="2025-01-29T11:40:32.950807359Z" level=info msg="StartContainer for \"6485121a8302bb4bf9e48a1003938c3e9f6bd43ab11f7b195349d1a47275be9c\" returns successfully" Jan 29 11:40:33.474230 kubelet[2831]: E0129 11:40:33.474187 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:33.844752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6485121a8302bb4bf9e48a1003938c3e9f6bd43ab11f7b195349d1a47275be9c-rootfs.mount: Deactivated successfully. 
Jan 29 11:40:33.851821 containerd[1614]: time="2025-01-29T11:40:33.851761887Z" level=info msg="shim disconnected" id=6485121a8302bb4bf9e48a1003938c3e9f6bd43ab11f7b195349d1a47275be9c namespace=k8s.io Jan 29 11:40:33.851821 containerd[1614]: time="2025-01-29T11:40:33.851815319Z" level=warning msg="cleaning up after shim disconnected" id=6485121a8302bb4bf9e48a1003938c3e9f6bd43ab11f7b195349d1a47275be9c namespace=k8s.io Jan 29 11:40:33.852205 containerd[1614]: time="2025-01-29T11:40:33.851823995Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:40:34.477183 kubelet[2831]: E0129 11:40:34.477151 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:34.480778 containerd[1614]: time="2025-01-29T11:40:34.479184118Z" level=info msg="CreateContainer within sandbox \"73c56d584c46ee6f2806d494a593ec75516a40f8dd82dbf1b4700384b07d7d7e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:40:34.480776 systemd[1]: Started sshd@7-10.0.0.147:22-10.0.0.1:38556.service - OpenSSH per-connection server daemon (10.0.0.1:38556). Jan 29 11:40:34.612515 sshd[3296]: Accepted publickey for core from 10.0.0.1 port 38556 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:40:34.614164 sshd-session[3296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:40:34.619708 systemd-logind[1593]: New session 8 of user core. Jan 29 11:40:34.625821 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:40:34.812410 sshd[3299]: Connection closed by 10.0.0.1 port 38556 Jan 29 11:40:34.812769 sshd-session[3296]: pam_unix(sshd:session): session closed for user core Jan 29 11:40:34.816967 systemd[1]: sshd@7-10.0.0.147:22-10.0.0.1:38556.service: Deactivated successfully. Jan 29 11:40:34.819534 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:40:34.819656 systemd-logind[1593]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:40:34.820778 systemd-logind[1593]: Removed session 8. Jan 29 11:40:34.964827 containerd[1614]: time="2025-01-29T11:40:34.964767190Z" level=info msg="CreateContainer within sandbox \"73c56d584c46ee6f2806d494a593ec75516a40f8dd82dbf1b4700384b07d7d7e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b454bcf420a0cf2f561619ba46be818e6082af77a8e1107dd301d263fca17868\"" Jan 29 11:40:34.965325 containerd[1614]: time="2025-01-29T11:40:34.965282112Z" level=info msg="StartContainer for \"b454bcf420a0cf2f561619ba46be818e6082af77a8e1107dd301d263fca17868\"" Jan 29 11:40:35.024418 containerd[1614]: time="2025-01-29T11:40:35.024371391Z" level=info msg="StartContainer for \"b454bcf420a0cf2f561619ba46be818e6082af77a8e1107dd301d263fca17868\" returns successfully" Jan 29 11:40:35.033739 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:40:35.034069 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:40:35.034140 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:40:35.040987 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:40:35.059195 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 11:40:35.082590 containerd[1614]: time="2025-01-29T11:40:35.082413802Z" level=info msg="shim disconnected" id=b454bcf420a0cf2f561619ba46be818e6082af77a8e1107dd301d263fca17868 namespace=k8s.io Jan 29 11:40:35.082590 containerd[1614]: time="2025-01-29T11:40:35.082495889Z" level=warning msg="cleaning up after shim disconnected" id=b454bcf420a0cf2f561619ba46be818e6082af77a8e1107dd301d263fca17868 namespace=k8s.io Jan 29 11:40:35.082590 containerd[1614]: time="2025-01-29T11:40:35.082507370Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:40:35.480862 kubelet[2831]: E0129 11:40:35.480741 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:35.483947 containerd[1614]: time="2025-01-29T11:40:35.483000403Z" level=info msg="CreateContainer within sandbox \"73c56d584c46ee6f2806d494a593ec75516a40f8dd82dbf1b4700384b07d7d7e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:40:35.513263 containerd[1614]: time="2025-01-29T11:40:35.513218741Z" level=info msg="CreateContainer within sandbox \"73c56d584c46ee6f2806d494a593ec75516a40f8dd82dbf1b4700384b07d7d7e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f90535142da78c3df7ec09929543c32547b385633a95b347bfbb2cf1c82e2e79\"" Jan 29 11:40:35.513986 containerd[1614]: time="2025-01-29T11:40:35.513939854Z" level=info msg="StartContainer for \"f90535142da78c3df7ec09929543c32547b385633a95b347bfbb2cf1c82e2e79\"" Jan 29 11:40:35.585499 containerd[1614]: time="2025-01-29T11:40:35.585459084Z" level=info msg="StartContainer for \"f90535142da78c3df7ec09929543c32547b385633a95b347bfbb2cf1c82e2e79\" returns successfully" Jan 29 11:40:35.839150 containerd[1614]: time="2025-01-29T11:40:35.839088800Z" level=info msg="shim disconnected" id=f90535142da78c3df7ec09929543c32547b385633a95b347bfbb2cf1c82e2e79 namespace=k8s.io Jan 29 11:40:35.839150 containerd[1614]: time="2025-01-29T11:40:35.839140498Z" level=warning msg="cleaning up after shim disconnected" id=f90535142da78c3df7ec09929543c32547b385633a95b347bfbb2cf1c82e2e79 namespace=k8s.io Jan 29 11:40:35.839150 containerd[1614]: time="2025-01-29T11:40:35.839148994Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:40:35.933851 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b454bcf420a0cf2f561619ba46be818e6082af77a8e1107dd301d263fca17868-rootfs.mount: Deactivated successfully. 
Jan 29 11:40:35.955967 containerd[1614]: time="2025-01-29T11:40:35.955923321Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:40:35.956671 containerd[1614]: time="2025-01-29T11:40:35.956605231Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 11:40:35.957779 containerd[1614]: time="2025-01-29T11:40:35.957746124Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:40:35.959155 containerd[1614]: time="2025-01-29T11:40:35.959112978Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.261751616s" Jan 29 11:40:35.959155 containerd[1614]: time="2025-01-29T11:40:35.959149428Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 11:40:35.961279 containerd[1614]: time="2025-01-29T11:40:35.961247677Z" level=info msg="CreateContainer within sandbox \"06fd2015e0681e5807e9efdf93c4346dd774dabc6ff1b62d8f639c1d3ecf51ce\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 11:40:35.974232 containerd[1614]: time="2025-01-29T11:40:35.974186799Z" level=info msg="CreateContainer within sandbox \"06fd2015e0681e5807e9efdf93c4346dd774dabc6ff1b62d8f639c1d3ecf51ce\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"29f0edb10e9cd36e748b2f2711aa118790e4f9cd4c640ce236bd038991702aad\"" Jan 29 11:40:35.974691 containerd[1614]: time="2025-01-29T11:40:35.974667756Z" level=info msg="StartContainer for \"29f0edb10e9cd36e748b2f2711aa118790e4f9cd4c640ce236bd038991702aad\"" Jan 29 11:40:36.138820 containerd[1614]: time="2025-01-29T11:40:36.138353829Z" level=info msg="StartContainer for \"29f0edb10e9cd36e748b2f2711aa118790e4f9cd4c640ce236bd038991702aad\" returns successfully" Jan 29 11:40:36.487926 kubelet[2831]: E0129 11:40:36.487813 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:36.492133 kubelet[2831]: E0129 11:40:36.492100 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:36.494719 containerd[1614]: time="2025-01-29T11:40:36.494574193Z" level=info msg="CreateContainer within sandbox \"73c56d584c46ee6f2806d494a593ec75516a40f8dd82dbf1b4700384b07d7d7e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:40:36.503135 kubelet[2831]: I0129 11:40:36.502638 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-w2mfz" podStartSLOduration=1.758500689 
podStartE2EDuration="14.50261157s" podCreationTimestamp="2025-01-29 11:40:22 +0000 UTC" firstStartedPulling="2025-01-29 11:40:23.215844213 +0000 UTC m=+15.971980510" lastFinishedPulling="2025-01-29 11:40:35.959955084 +0000 UTC m=+28.716091391" observedRunningTime="2025-01-29 11:40:36.496480043 +0000 UTC m=+29.252616350" watchObservedRunningTime="2025-01-29 11:40:36.50261157 +0000 UTC m=+29.258747897" Jan 29 11:40:36.544122 containerd[1614]: time="2025-01-29T11:40:36.544074427Z" level=info msg="CreateContainer within sandbox \"73c56d584c46ee6f2806d494a593ec75516a40f8dd82dbf1b4700384b07d7d7e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2579b5f8da0a3ad47ad1a398ff2dc18511bb5e43eb22204b395548023b4ffd53\"" Jan 29 11:40:36.545787 containerd[1614]: time="2025-01-29T11:40:36.545733947Z" level=info msg="StartContainer for \"2579b5f8da0a3ad47ad1a398ff2dc18511bb5e43eb22204b395548023b4ffd53\"" Jan 29 11:40:36.631086 containerd[1614]: time="2025-01-29T11:40:36.631030895Z" level=info msg="StartContainer for \"2579b5f8da0a3ad47ad1a398ff2dc18511bb5e43eb22204b395548023b4ffd53\" returns successfully" Jan 29 11:40:36.660234 containerd[1614]: time="2025-01-29T11:40:36.660168318Z" level=info msg="shim disconnected" id=2579b5f8da0a3ad47ad1a398ff2dc18511bb5e43eb22204b395548023b4ffd53 namespace=k8s.io Jan 29 11:40:36.660234 containerd[1614]: time="2025-01-29T11:40:36.660222582Z" level=warning msg="cleaning up after shim disconnected" id=2579b5f8da0a3ad47ad1a398ff2dc18511bb5e43eb22204b395548023b4ffd53 namespace=k8s.io Jan 29 11:40:36.660234 containerd[1614]: time="2025-01-29T11:40:36.660233442Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:40:37.499036 kubelet[2831]: E0129 11:40:37.498181 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:37.499036 kubelet[2831]: E0129 11:40:37.498319 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:37.500997 containerd[1614]: time="2025-01-29T11:40:37.500411797Z" level=info msg="CreateContainer within sandbox \"73c56d584c46ee6f2806d494a593ec75516a40f8dd82dbf1b4700384b07d7d7e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:40:37.543957 containerd[1614]: time="2025-01-29T11:40:37.543909024Z" level=info msg="CreateContainer within sandbox \"73c56d584c46ee6f2806d494a593ec75516a40f8dd82dbf1b4700384b07d7d7e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1\"" Jan 29 11:40:37.544662 containerd[1614]: time="2025-01-29T11:40:37.544505900Z" level=info msg="StartContainer for \"54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1\"" Jan 29 11:40:37.604542 containerd[1614]: time="2025-01-29T11:40:37.604469272Z" level=info msg="StartContainer for \"54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1\" returns successfully" Jan 29 11:40:37.824099 kubelet[2831]: I0129 11:40:37.824062 2831 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 11:40:37.849137 kubelet[2831]: I0129 11:40:37.849066 2831 topology_manager.go:215] "Topology Admit Handler" podUID="04a67990-f0cf-4657-9691-3c0004be12bb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gwjcg" Jan 29 11:40:37.849388 
kubelet[2831]: I0129 11:40:37.849357 2831 topology_manager.go:215] "Topology Admit Handler" podUID="300f469d-4ac0-46a5-840b-3c55f458aa66" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xq9pk" Jan 29 11:40:37.917248 kubelet[2831]: I0129 11:40:37.917207 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/300f469d-4ac0-46a5-840b-3c55f458aa66-config-volume\") pod \"coredns-7db6d8ff4d-xq9pk\" (UID: \"300f469d-4ac0-46a5-840b-3c55f458aa66\") " pod="kube-system/coredns-7db6d8ff4d-xq9pk" Jan 29 11:40:37.917248 kubelet[2831]: I0129 11:40:37.917249 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04a67990-f0cf-4657-9691-3c0004be12bb-config-volume\") pod \"coredns-7db6d8ff4d-gwjcg\" (UID: \"04a67990-f0cf-4657-9691-3c0004be12bb\") " pod="kube-system/coredns-7db6d8ff4d-gwjcg" Jan 29 11:40:37.917248 kubelet[2831]: I0129 11:40:37.917266 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45f8r\" (UniqueName: \"kubernetes.io/projected/300f469d-4ac0-46a5-840b-3c55f458aa66-kube-api-access-45f8r\") pod \"coredns-7db6d8ff4d-xq9pk\" (UID: \"300f469d-4ac0-46a5-840b-3c55f458aa66\") " pod="kube-system/coredns-7db6d8ff4d-xq9pk" Jan 29 11:40:37.917461 kubelet[2831]: I0129 11:40:37.917328 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc2g7\" (UniqueName: \"kubernetes.io/projected/04a67990-f0cf-4657-9691-3c0004be12bb-kube-api-access-lc2g7\") pod \"coredns-7db6d8ff4d-gwjcg\" (UID: \"04a67990-f0cf-4657-9691-3c0004be12bb\") " pod="kube-system/coredns-7db6d8ff4d-gwjcg" Jan 29 11:40:38.155737 kubelet[2831]: E0129 11:40:38.155617 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:38.156489 containerd[1614]: time="2025-01-29T11:40:38.156459141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xq9pk,Uid:300f469d-4ac0-46a5-840b-3c55f458aa66,Namespace:kube-system,Attempt:0,}" Jan 29 11:40:38.162081 kubelet[2831]: E0129 11:40:38.162059 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:38.162538 containerd[1614]: time="2025-01-29T11:40:38.162353057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gwjcg,Uid:04a67990-f0cf-4657-9691-3c0004be12bb,Namespace:kube-system,Attempt:0,}" Jan 29 11:40:38.502461 kubelet[2831]: E0129 11:40:38.502326 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:38.586585 kubelet[2831]: I0129 11:40:38.586506 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n82lj" podStartSLOduration=6.953432299 podStartE2EDuration="16.586483628s" podCreationTimestamp="2025-01-29 11:40:22 +0000 UTC" firstStartedPulling="2025-01-29 11:40:23.06409874 +0000 UTC m=+15.820235047" lastFinishedPulling="2025-01-29 11:40:32.697150069 +0000 UTC m=+25.453286376" observedRunningTime="2025-01-29 11:40:38.58619525 +0000 UTC m=+31.342331557" 
watchObservedRunningTime="2025-01-29 11:40:38.586483628 +0000 UTC m=+31.342619935" Jan 29 11:40:39.503662 kubelet[2831]: E0129 11:40:39.503620 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:39.784717 systemd-networkd[1246]: cilium_host: Link UP Jan 29 11:40:39.784902 systemd-networkd[1246]: cilium_net: Link UP Jan 29 11:40:39.785228 systemd-networkd[1246]: cilium_net: Gained carrier Jan 29 11:40:39.785461 systemd-networkd[1246]: cilium_host: Gained carrier Jan 29 11:40:39.785669 systemd-networkd[1246]: cilium_net: Gained IPv6LL Jan 29 11:40:39.785854 systemd-networkd[1246]: cilium_host: Gained IPv6LL Jan 29 11:40:39.822761 systemd[1]: Started sshd@8-10.0.0.147:22-10.0.0.1:34716.service - OpenSSH per-connection server daemon (10.0.0.1:34716). Jan 29 11:40:39.862050 sshd[3720]: Accepted publickey for core from 10.0.0.1 port 34716 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:40:39.863556 sshd-session[3720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:40:39.868545 systemd-logind[1593]: New session 9 of user core. Jan 29 11:40:39.875794 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:40:39.891452 systemd-networkd[1246]: cilium_vxlan: Link UP Jan 29 11:40:39.891464 systemd-networkd[1246]: cilium_vxlan: Gained carrier Jan 29 11:40:39.999356 sshd[3764]: Connection closed by 10.0.0.1 port 34716 Jan 29 11:40:40.001041 sshd-session[3720]: pam_unix(sshd:session): session closed for user core Jan 29 11:40:40.006296 systemd[1]: sshd@8-10.0.0.147:22-10.0.0.1:34716.service: Deactivated successfully. Jan 29 11:40:40.009379 systemd-logind[1593]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:40:40.009433 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:40:40.011113 systemd-logind[1593]: Removed session 9. 
Jan 29 11:40:40.103562 kernel: NET: Registered PF_ALG protocol family Jan 29 11:40:40.505379 kubelet[2831]: E0129 11:40:40.505249 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:40.778241 systemd-networkd[1246]: lxc_health: Link UP Jan 29 11:40:40.786580 systemd-networkd[1246]: lxc_health: Gained carrier Jan 29 11:40:41.136020 systemd-networkd[1246]: lxc63305a09c183: Link UP Jan 29 11:40:41.144534 kernel: eth0: renamed from tmp5048c Jan 29 11:40:41.172701 kernel: eth0: renamed from tmp7d2e0 Jan 29 11:40:41.181836 systemd-networkd[1246]: lxc63305a09c183: Gained carrier Jan 29 11:40:41.183039 systemd-networkd[1246]: lxc1d9a74d92fa5: Link UP Jan 29 11:40:41.187204 systemd-networkd[1246]: lxc1d9a74d92fa5: Gained carrier Jan 29 11:40:41.212622 systemd-networkd[1246]: cilium_vxlan: Gained IPv6LL Jan 29 11:40:41.508236 kubelet[2831]: E0129 11:40:41.508096 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:42.297663 systemd-networkd[1246]: lxc_health: Gained IPv6LL Jan 29 11:40:42.509200 kubelet[2831]: E0129 11:40:42.509154 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:42.809692 systemd-networkd[1246]: lxc1d9a74d92fa5: Gained IPv6LL Jan 29 11:40:43.129674 systemd-networkd[1246]: lxc63305a09c183: Gained IPv6LL Jan 29 11:40:43.510721 kubelet[2831]: E0129 11:40:43.510596 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:44.711752 containerd[1614]: time="2025-01-29T11:40:44.711376837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:40:44.712231 containerd[1614]: time="2025-01-29T11:40:44.712039095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:40:44.712231 containerd[1614]: time="2025-01-29T11:40:44.712078470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:40:44.712294 containerd[1614]: time="2025-01-29T11:40:44.712214698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:40:44.712675 containerd[1614]: time="2025-01-29T11:40:44.712363371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:40:44.712766 containerd[1614]: time="2025-01-29T11:40:44.712476616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:40:44.712766 containerd[1614]: time="2025-01-29T11:40:44.712543012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:40:44.712766 containerd[1614]: time="2025-01-29T11:40:44.712622152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:40:44.742586 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:40:44.770577 containerd[1614]: time="2025-01-29T11:40:44.770514509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xq9pk,Uid:300f469d-4ac0-46a5-840b-3c55f458aa66,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d2e06abccf02c3908f807624568a85d62985ca2cbb84270fadd571b7d8537dc\"" Jan 29 11:40:44.771571 kubelet[2831]: E0129 11:40:44.771279 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:44.774196 containerd[1614]: time="2025-01-29T11:40:44.774163290Z" level=info msg="CreateContainer within sandbox \"7d2e06abccf02c3908f807624568a85d62985ca2cbb84270fadd571b7d8537dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:40:44.775964 containerd[1614]: time="2025-01-29T11:40:44.775942289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gwjcg,Uid:04a67990-f0cf-4657-9691-3c0004be12bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5048c1a9a56efaff5d99f9e86c0217bb5e53ad6fda1d7883f314b5be6c82b71c\"" Jan 29 11:40:44.776616 kubelet[2831]: E0129 11:40:44.776587 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:44.778245 containerd[1614]: time="2025-01-29T11:40:44.778221177Z" level=info msg="CreateContainer within sandbox \"5048c1a9a56efaff5d99f9e86c0217bb5e53ad6fda1d7883f314b5be6c82b71c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:40:44.804349 containerd[1614]: time="2025-01-29T11:40:44.804296924Z" level=info msg="CreateContainer within sandbox \"7d2e06abccf02c3908f807624568a85d62985ca2cbb84270fadd571b7d8537dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1983b1a5562aea3c0f010f735ca0e47758ea41ce2cb219de422f44df3f05ddf9\"" Jan 29 11:40:44.805047 containerd[1614]: time="2025-01-29T11:40:44.805019767Z" level=info msg="StartContainer for \"1983b1a5562aea3c0f010f735ca0e47758ea41ce2cb219de422f44df3f05ddf9\"" Jan 29 11:40:44.809787 containerd[1614]: time="2025-01-29T11:40:44.809752857Z" level=info msg="CreateContainer within sandbox \"5048c1a9a56efaff5d99f9e86c0217bb5e53ad6fda1d7883f314b5be6c82b71c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9b4fa3ab85408d9c2b4ab492bb144c7cc702c7e29c9d280065f0bcb4f8e91ba9\"" Jan 29 11:40:44.810235 containerd[1614]: time="2025-01-29T11:40:44.810214303Z" level=info msg="StartContainer for \"9b4fa3ab85408d9c2b4ab492bb144c7cc702c7e29c9d280065f0bcb4f8e91ba9\"" Jan 29 11:40:44.869316 containerd[1614]: time="2025-01-29T11:40:44.869234002Z" level=info msg="StartContainer for \"1983b1a5562aea3c0f010f735ca0e47758ea41ce2cb219de422f44df3f05ddf9\" returns successfully" Jan 29 11:40:44.869316 containerd[1614]: time="2025-01-29T11:40:44.869239242Z" level=info msg="StartContainer for \"9b4fa3ab85408d9c2b4ab492bb144c7cc702c7e29c9d280065f0bcb4f8e91ba9\" returns successfully" Jan 29 11:40:45.018751 systemd[1]: Started sshd@9-10.0.0.147:22-10.0.0.1:34724.service - OpenSSH per-connection server daemon (10.0.0.1:34724). 
Jan 29 11:40:45.058920 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 34724 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:40:45.060784 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:40:45.064969 systemd-logind[1593]: New session 10 of user core. Jan 29 11:40:45.072860 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:40:45.195746 sshd[4249]: Connection closed by 10.0.0.1 port 34724 Jan 29 11:40:45.196129 sshd-session[4246]: pam_unix(sshd:session): session closed for user core Jan 29 11:40:45.200709 systemd[1]: sshd@9-10.0.0.147:22-10.0.0.1:34724.service: Deactivated successfully. Jan 29 11:40:45.203484 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:40:45.204394 systemd-logind[1593]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:40:45.205378 systemd-logind[1593]: Removed session 10. Jan 29 11:40:45.517437 kubelet[2831]: E0129 11:40:45.516333 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:45.518037 kubelet[2831]: E0129 11:40:45.517990 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:45.533895 kubelet[2831]: I0129 11:40:45.533814 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gwjcg" podStartSLOduration=23.533792461 podStartE2EDuration="23.533792461s" podCreationTimestamp="2025-01-29 11:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:40:45.533656923 +0000 UTC m=+38.289793260" watchObservedRunningTime="2025-01-29 11:40:45.533792461 +0000 UTC m=+38.289928768" Jan 29 11:40:45.559188 kubelet[2831]: I0129 11:40:45.557922 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xq9pk" podStartSLOduration=23.557897523 podStartE2EDuration="23.557897523s" podCreationTimestamp="2025-01-29 11:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:40:45.546618756 +0000 UTC m=+38.302755084" watchObservedRunningTime="2025-01-29 11:40:45.557897523 +0000 UTC m=+38.314033830" Jan 29 11:40:45.717713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount624321978.mount: Deactivated successfully. 
Jan 29 11:40:46.520339 kubelet[2831]: E0129 11:40:46.520288 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:46.520339 kubelet[2831]: E0129 11:40:46.520348 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:47.523329 kubelet[2831]: E0129 11:40:47.521784 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:47.526260 kubelet[2831]: E0129 11:40:47.526236 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:40:50.211955 systemd[1]: Started sshd@10-10.0.0.147:22-10.0.0.1:56134.service - OpenSSH per-connection server daemon (10.0.0.1:56134). Jan 29 11:40:50.246340 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 56134 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:40:50.248032 sshd-session[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:40:50.252224 systemd-logind[1593]: New session 11 of user core. Jan 29 11:40:50.259909 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:40:50.384556 sshd[4273]: Connection closed by 10.0.0.1 port 56134 Jan 29 11:40:50.385193 sshd-session[4270]: pam_unix(sshd:session): session closed for user core Jan 29 11:40:50.393815 systemd[1]: Started sshd@11-10.0.0.147:22-10.0.0.1:56136.service - OpenSSH per-connection server daemon (10.0.0.1:56136). Jan 29 11:40:50.394648 systemd[1]: sshd@10-10.0.0.147:22-10.0.0.1:56134.service: Deactivated successfully. Jan 29 11:40:50.397149 systemd-logind[1593]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:40:50.398072 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:40:50.399256 systemd-logind[1593]: Removed session 11. Jan 29 11:40:50.434652 sshd[4283]: Accepted publickey for core from 10.0.0.1 port 56136 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:40:50.436495 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:40:50.441095 systemd-logind[1593]: New session 12 of user core. Jan 29 11:40:50.454954 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:40:50.624282 sshd[4289]: Connection closed by 10.0.0.1 port 56136 Jan 29 11:40:50.624749 sshd-session[4283]: pam_unix(sshd:session): session closed for user core Jan 29 11:40:50.632789 systemd[1]: Started sshd@12-10.0.0.147:22-10.0.0.1:56148.service - OpenSSH per-connection server daemon (10.0.0.1:56148). Jan 29 11:40:50.633376 systemd[1]: sshd@11-10.0.0.147:22-10.0.0.1:56136.service: Deactivated successfully. Jan 29 11:40:50.636610 systemd-logind[1593]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:40:50.637574 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:40:50.638881 systemd-logind[1593]: Removed session 12. 
Jan 29 11:40:50.664889 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 56148 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:40:50.666265 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:40:50.670426 systemd-logind[1593]: New session 13 of user core. Jan 29 11:40:50.681852 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:40:50.802048 sshd[4302]: Connection closed by 10.0.0.1 port 56148 Jan 29 11:40:50.802491 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Jan 29 11:40:50.805904 systemd[1]: sshd@12-10.0.0.147:22-10.0.0.1:56148.service: Deactivated successfully. Jan 29 11:40:50.809971 systemd-logind[1593]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:40:50.810381 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:40:50.811726 systemd-logind[1593]: Removed session 13. Jan 29 11:40:55.817766 systemd[1]: Started sshd@13-10.0.0.147:22-10.0.0.1:56150.service - OpenSSH per-connection server daemon (10.0.0.1:56150). Jan 29 11:40:55.850347 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 56150 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:40:55.852042 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:40:55.856418 systemd-logind[1593]: New session 14 of user core. Jan 29 11:40:55.869024 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:40:55.988070 sshd[4320]: Connection closed by 10.0.0.1 port 56150 Jan 29 11:40:55.988486 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Jan 29 11:40:55.993020 systemd[1]: sshd@13-10.0.0.147:22-10.0.0.1:56150.service: Deactivated successfully. Jan 29 11:40:55.996071 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:40:55.997233 systemd-logind[1593]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:40:55.998349 systemd-logind[1593]: Removed session 14. Jan 29 11:41:00.996738 systemd[1]: Started sshd@14-10.0.0.147:22-10.0.0.1:36546.service - OpenSSH per-connection server daemon (10.0.0.1:36546). Jan 29 11:41:01.028560 sshd[4332]: Accepted publickey for core from 10.0.0.1 port 36546 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:41:01.030378 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:41:01.034998 systemd-logind[1593]: New session 15 of user core. Jan 29 11:41:01.045859 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:41:01.156607 sshd[4335]: Connection closed by 10.0.0.1 port 36546 Jan 29 11:41:01.157156 sshd-session[4332]: pam_unix(sshd:session): session closed for user core Jan 29 11:41:01.166754 systemd[1]: Started sshd@15-10.0.0.147:22-10.0.0.1:36550.service - OpenSSH per-connection server daemon (10.0.0.1:36550). Jan 29 11:41:01.167453 systemd[1]: sshd@14-10.0.0.147:22-10.0.0.1:36546.service: Deactivated successfully. Jan 29 11:41:01.171333 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:41:01.172158 systemd-logind[1593]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:41:01.173372 systemd-logind[1593]: Removed session 15. 
Jan 29 11:41:01.198833 sshd[4344]: Accepted publickey for core from 10.0.0.1 port 36550 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:41:01.200262 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:41:01.205004 systemd-logind[1593]: New session 16 of user core. Jan 29 11:41:01.218913 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:41:01.462992 sshd[4350]: Connection closed by 10.0.0.1 port 36550 Jan 29 11:41:01.463572 sshd-session[4344]: pam_unix(sshd:session): session closed for user core Jan 29 11:41:01.469750 systemd[1]: Started sshd@16-10.0.0.147:22-10.0.0.1:36556.service - OpenSSH per-connection server daemon (10.0.0.1:36556). Jan 29 11:41:01.470480 systemd[1]: sshd@15-10.0.0.147:22-10.0.0.1:36550.service: Deactivated successfully. Jan 29 11:41:01.474012 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:41:01.474979 systemd-logind[1593]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:41:01.475925 systemd-logind[1593]: Removed session 16. Jan 29 11:41:01.512535 sshd[4358]: Accepted publickey for core from 10.0.0.1 port 36556 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:41:01.514178 sshd-session[4358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:41:01.518575 systemd-logind[1593]: New session 17 of user core. Jan 29 11:41:01.534922 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:41:02.840101 sshd[4364]: Connection closed by 10.0.0.1 port 36556 Jan 29 11:41:02.840485 sshd-session[4358]: pam_unix(sshd:session): session closed for user core Jan 29 11:41:02.849940 systemd[1]: Started sshd@17-10.0.0.147:22-10.0.0.1:36572.service - OpenSSH per-connection server daemon (10.0.0.1:36572). Jan 29 11:41:02.850449 systemd[1]: sshd@16-10.0.0.147:22-10.0.0.1:36556.service: Deactivated successfully. Jan 29 11:41:02.853816 systemd-logind[1593]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:41:02.854844 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:41:02.856513 systemd-logind[1593]: Removed session 17. Jan 29 11:41:02.893323 sshd[4379]: Accepted publickey for core from 10.0.0.1 port 36572 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:41:02.895087 sshd-session[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:41:02.899423 systemd-logind[1593]: New session 18 of user core. Jan 29 11:41:02.905806 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:41:03.146425 sshd[4386]: Connection closed by 10.0.0.1 port 36572 Jan 29 11:41:03.147243 sshd-session[4379]: pam_unix(sshd:session): session closed for user core Jan 29 11:41:03.162214 systemd[1]: Started sshd@18-10.0.0.147:22-10.0.0.1:36580.service - OpenSSH per-connection server daemon (10.0.0.1:36580). Jan 29 11:41:03.162872 systemd[1]: sshd@17-10.0.0.147:22-10.0.0.1:36572.service: Deactivated successfully. Jan 29 11:41:03.164777 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:41:03.166595 systemd-logind[1593]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:41:03.167670 systemd-logind[1593]: Removed session 18. 
Jan 29 11:41:03.194018 sshd[4394]: Accepted publickey for core from 10.0.0.1 port 36580 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:41:03.195718 sshd-session[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:41:03.200580 systemd-logind[1593]: New session 19 of user core. Jan 29 11:41:03.211848 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:41:03.317804 sshd[4399]: Connection closed by 10.0.0.1 port 36580 Jan 29 11:41:03.318175 sshd-session[4394]: pam_unix(sshd:session): session closed for user core Jan 29 11:41:03.322971 systemd[1]: sshd@18-10.0.0.147:22-10.0.0.1:36580.service: Deactivated successfully. Jan 29 11:41:03.325148 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:41:03.325702 systemd-logind[1593]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:41:03.327023 systemd-logind[1593]: Removed session 19. Jan 29 11:41:08.333739 systemd[1]: Started sshd@19-10.0.0.147:22-10.0.0.1:53074.service - OpenSSH per-connection server daemon (10.0.0.1:53074). Jan 29 11:41:08.364609 sshd[4414]: Accepted publickey for core from 10.0.0.1 port 53074 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:41:08.365988 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:41:08.369766 systemd-logind[1593]: New session 20 of user core. Jan 29 11:41:08.376767 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 11:41:08.479592 sshd[4417]: Connection closed by 10.0.0.1 port 53074 Jan 29 11:41:08.479916 sshd-session[4414]: pam_unix(sshd:session): session closed for user core Jan 29 11:41:08.484008 systemd[1]: sshd@19-10.0.0.147:22-10.0.0.1:53074.service: Deactivated successfully. Jan 29 11:41:08.486917 systemd-logind[1593]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:41:08.487059 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:41:08.488127 systemd-logind[1593]: Removed session 20. Jan 29 11:41:13.499921 systemd[1]: Started sshd@20-10.0.0.147:22-10.0.0.1:53084.service - OpenSSH per-connection server daemon (10.0.0.1:53084). Jan 29 11:41:13.534552 sshd[4432]: Accepted publickey for core from 10.0.0.1 port 53084 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:41:13.536121 sshd-session[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:41:13.540373 systemd-logind[1593]: New session 21 of user core. Jan 29 11:41:13.554832 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 11:41:13.666093 sshd[4435]: Connection closed by 10.0.0.1 port 53084 Jan 29 11:41:13.666447 sshd-session[4432]: pam_unix(sshd:session): session closed for user core Jan 29 11:41:13.669854 systemd[1]: sshd@20-10.0.0.147:22-10.0.0.1:53084.service: Deactivated successfully. Jan 29 11:41:13.672365 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:41:13.673195 systemd-logind[1593]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:41:13.674177 systemd-logind[1593]: Removed session 21. Jan 29 11:41:18.676742 systemd[1]: Started sshd@21-10.0.0.147:22-10.0.0.1:58404.service - OpenSSH per-connection server daemon (10.0.0.1:58404). 
Jan 29 11:41:18.708304 sshd[4447]: Accepted publickey for core from 10.0.0.1 port 58404 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:41:18.709704 sshd-session[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:41:18.714179 systemd-logind[1593]: New session 22 of user core. Jan 29 11:41:18.723795 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:41:18.827467 sshd[4450]: Connection closed by 10.0.0.1 port 58404 Jan 29 11:41:18.827830 sshd-session[4447]: pam_unix(sshd:session): session closed for user core Jan 29 11:41:18.832587 systemd[1]: sshd@21-10.0.0.147:22-10.0.0.1:58404.service: Deactivated successfully. Jan 29 11:41:18.835657 systemd-logind[1593]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:41:18.835755 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:41:18.836716 systemd-logind[1593]: Removed session 22. Jan 29 11:41:23.842823 systemd[1]: Started sshd@22-10.0.0.147:22-10.0.0.1:58416.service - OpenSSH per-connection server daemon (10.0.0.1:58416). Jan 29 11:41:23.875558 sshd[4464]: Accepted publickey for core from 10.0.0.1 port 58416 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:41:23.877183 sshd-session[4464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:41:23.881201 systemd-logind[1593]: New session 23 of user core. Jan 29 11:41:23.888791 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 11:41:23.996086 sshd[4467]: Connection closed by 10.0.0.1 port 58416 Jan 29 11:41:23.996494 sshd-session[4464]: pam_unix(sshd:session): session closed for user core Jan 29 11:41:24.003719 systemd[1]: Started sshd@23-10.0.0.147:22-10.0.0.1:58418.service - OpenSSH per-connection server daemon (10.0.0.1:58418). Jan 29 11:41:24.004280 systemd[1]: sshd@22-10.0.0.147:22-10.0.0.1:58416.service: Deactivated successfully. Jan 29 11:41:24.007425 systemd-logind[1593]: Session 23 logged out. Waiting for processes to exit. Jan 29 11:41:24.008089 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 11:41:24.009163 systemd-logind[1593]: Removed session 23. Jan 29 11:41:24.036017 sshd[4477]: Accepted publickey for core from 10.0.0.1 port 58418 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:41:24.037473 sshd-session[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:41:24.041362 systemd-logind[1593]: New session 24 of user core. Jan 29 11:41:24.050785 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 11:41:24.325287 kubelet[2831]: E0129 11:41:24.325247 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:41:25.480085 containerd[1614]: time="2025-01-29T11:41:25.479485768Z" level=info msg="StopContainer for \"29f0edb10e9cd36e748b2f2711aa118790e4f9cd4c640ce236bd038991702aad\" with timeout 30 (s)" Jan 29 11:41:25.486626 containerd[1614]: time="2025-01-29T11:41:25.486575937Z" level=info msg="Stop container \"29f0edb10e9cd36e748b2f2711aa118790e4f9cd4c640ce236bd038991702aad\" with signal terminated" Jan 29 11:41:25.524807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29f0edb10e9cd36e748b2f2711aa118790e4f9cd4c640ce236bd038991702aad-rootfs.mount: Deactivated successfully. 
Jan 29 11:41:25.526036 containerd[1614]: time="2025-01-29T11:41:25.525830957Z" level=info msg="StopContainer for \"54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1\" with timeout 2 (s)" Jan 29 11:41:25.526149 containerd[1614]: time="2025-01-29T11:41:25.526123295Z" level=info msg="Stop container \"54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1\" with signal terminated" Jan 29 11:41:25.533376 systemd-networkd[1246]: lxc_health: Link DOWN Jan 29 11:41:25.533387 systemd-networkd[1246]: lxc_health: Lost carrier Jan 29 11:41:25.535232 containerd[1614]: time="2025-01-29T11:41:25.535196042Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:41:25.536021 containerd[1614]: time="2025-01-29T11:41:25.535969656Z" level=info msg="shim disconnected" id=29f0edb10e9cd36e748b2f2711aa118790e4f9cd4c640ce236bd038991702aad namespace=k8s.io Jan 29 11:41:25.536021 containerd[1614]: time="2025-01-29T11:41:25.536018529Z" level=warning msg="cleaning up after shim disconnected" id=29f0edb10e9cd36e748b2f2711aa118790e4f9cd4c640ce236bd038991702aad namespace=k8s.io Jan 29 11:41:25.536021 containerd[1614]: time="2025-01-29T11:41:25.536027165Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:41:25.552741 containerd[1614]: time="2025-01-29T11:41:25.552671982Z" level=info msg="StopContainer for \"29f0edb10e9cd36e748b2f2711aa118790e4f9cd4c640ce236bd038991702aad\" returns successfully" Jan 29 11:41:25.558557 containerd[1614]: time="2025-01-29T11:41:25.558089936Z" level=info msg="StopPodSandbox for \"06fd2015e0681e5807e9efdf93c4346dd774dabc6ff1b62d8f639c1d3ecf51ce\"" Jan 29 11:41:25.558557 containerd[1614]: time="2025-01-29T11:41:25.558140192Z" level=info msg="Container to stop \"29f0edb10e9cd36e748b2f2711aa118790e4f9cd4c640ce236bd038991702aad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:41:25.560825 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06fd2015e0681e5807e9efdf93c4346dd774dabc6ff1b62d8f639c1d3ecf51ce-shm.mount: Deactivated successfully. Jan 29 11:41:25.584148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1-rootfs.mount: Deactivated successfully. Jan 29 11:41:25.588789 containerd[1614]: time="2025-01-29T11:41:25.588727896Z" level=info msg="shim disconnected" id=54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1 namespace=k8s.io Jan 29 11:41:25.588789 containerd[1614]: time="2025-01-29T11:41:25.588781598Z" level=warning msg="cleaning up after shim disconnected" id=54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1 namespace=k8s.io Jan 29 11:41:25.588789 containerd[1614]: time="2025-01-29T11:41:25.588790324Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:41:25.600105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06fd2015e0681e5807e9efdf93c4346dd774dabc6ff1b62d8f639c1d3ecf51ce-rootfs.mount: Deactivated successfully. 
Jan 29 11:41:25.603569 containerd[1614]: time="2025-01-29T11:41:25.603490115Z" level=info msg="shim disconnected" id=06fd2015e0681e5807e9efdf93c4346dd774dabc6ff1b62d8f639c1d3ecf51ce namespace=k8s.io Jan 29 11:41:25.603569 containerd[1614]: time="2025-01-29T11:41:25.603568063Z" level=warning msg="cleaning up after shim disconnected" id=06fd2015e0681e5807e9efdf93c4346dd774dabc6ff1b62d8f639c1d3ecf51ce namespace=k8s.io Jan 29 11:41:25.603719 containerd[1614]: time="2025-01-29T11:41:25.603578073Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:41:25.605114 containerd[1614]: time="2025-01-29T11:41:25.605079433Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:41:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:41:25.609678 containerd[1614]: time="2025-01-29T11:41:25.609513903Z" level=info msg="StopContainer for \"54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1\" returns successfully" Jan 29 11:41:25.610137 containerd[1614]: time="2025-01-29T11:41:25.610118505Z" level=info msg="StopPodSandbox for \"73c56d584c46ee6f2806d494a593ec75516a40f8dd82dbf1b4700384b07d7d7e\"" Jan 29 11:41:25.610507 containerd[1614]: time="2025-01-29T11:41:25.610277478Z" level=info msg="Container to stop \"b454bcf420a0cf2f561619ba46be818e6082af77a8e1107dd301d263fca17868\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:41:25.610507 containerd[1614]: time="2025-01-29T11:41:25.610325028Z" level=info msg="Container to stop \"2579b5f8da0a3ad47ad1a398ff2dc18511bb5e43eb22204b395548023b4ffd53\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:41:25.610507 containerd[1614]: time="2025-01-29T11:41:25.610333223Z" level=info msg="Container to stop \"54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:41:25.610507 containerd[1614]: time="2025-01-29T11:41:25.610342271Z" level=info msg="Container to stop \"6485121a8302bb4bf9e48a1003938c3e9f6bd43ab11f7b195349d1a47275be9c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:41:25.610507 containerd[1614]: time="2025-01-29T11:41:25.610350477Z" level=info msg="Container to stop \"f90535142da78c3df7ec09929543c32547b385633a95b347bfbb2cf1c82e2e79\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:41:25.617999 containerd[1614]: time="2025-01-29T11:41:25.617960558Z" level=info msg="TearDown network for sandbox \"06fd2015e0681e5807e9efdf93c4346dd774dabc6ff1b62d8f639c1d3ecf51ce\" successfully" Jan 29 11:41:25.617999 containerd[1614]: time="2025-01-29T11:41:25.617988080Z" level=info msg="StopPodSandbox for \"06fd2015e0681e5807e9efdf93c4346dd774dabc6ff1b62d8f639c1d3ecf51ce\" returns successfully" Jan 29 11:41:25.642050 containerd[1614]: time="2025-01-29T11:41:25.641980507Z" level=info msg="shim disconnected" id=73c56d584c46ee6f2806d494a593ec75516a40f8dd82dbf1b4700384b07d7d7e namespace=k8s.io Jan 29 11:41:25.642050 containerd[1614]: time="2025-01-29T11:41:25.642035983Z" level=warning msg="cleaning up after shim disconnected" id=73c56d584c46ee6f2806d494a593ec75516a40f8dd82dbf1b4700384b07d7d7e namespace=k8s.io Jan 29 11:41:25.642050 containerd[1614]: time="2025-01-29T11:41:25.642044809Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:41:25.658409 containerd[1614]: 
time="2025-01-29T11:41:25.658358484Z" level=info msg="TearDown network for sandbox \"73c56d584c46ee6f2806d494a593ec75516a40f8dd82dbf1b4700384b07d7d7e\" successfully" Jan 29 11:41:25.658409 containerd[1614]: time="2025-01-29T11:41:25.658395806Z" level=info msg="StopPodSandbox for \"73c56d584c46ee6f2806d494a593ec75516a40f8dd82dbf1b4700384b07d7d7e\" returns successfully" Jan 29 11:41:25.766421 kubelet[2831]: I0129 11:41:25.766259 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-cilium-run\") pod \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " Jan 29 11:41:25.766421 kubelet[2831]: I0129 11:41:25.766304 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-cilium-cgroup\") pod \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " Jan 29 11:41:25.766421 kubelet[2831]: I0129 11:41:25.766347 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-cilium-config-path\") pod \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " Jan 29 11:41:25.766421 kubelet[2831]: I0129 11:41:25.766366 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-etc-cni-netd\") pod \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " Jan 29 11:41:25.766421 kubelet[2831]: I0129 11:41:25.766383 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-bpf-maps\") pod \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " Jan 29 11:41:25.766421 kubelet[2831]: I0129 11:41:25.766414 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61fdd063-1c5b-45a4-b1e5-d82f811399be-cilium-config-path\") pod \"61fdd063-1c5b-45a4-b1e5-d82f811399be\" (UID: \"61fdd063-1c5b-45a4-b1e5-d82f811399be\") " Jan 29 11:41:25.767831 kubelet[2831]: I0129 11:41:25.766428 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-lib-modules\") pod \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " Jan 29 11:41:25.767831 kubelet[2831]: I0129 11:41:25.766453 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-clustermesh-secrets\") pod \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " Jan 29 11:41:25.767831 kubelet[2831]: I0129 11:41:25.766468 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-host-proc-sys-net\") pod \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " Jan 29 11:41:25.767831 
kubelet[2831]: I0129 11:41:25.766481 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-host-proc-sys-kernel\") pod \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " Jan 29 11:41:25.767831 kubelet[2831]: I0129 11:41:25.766496 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-hubble-tls\") pod \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " Jan 29 11:41:25.767831 kubelet[2831]: I0129 11:41:25.766510 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-cni-path\") pod \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " Jan 29 11:41:25.768073 kubelet[2831]: I0129 11:41:25.766544 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-xtables-lock\") pod \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " Jan 29 11:41:25.768073 kubelet[2831]: I0129 11:41:25.766560 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7cjz\" (UniqueName: \"kubernetes.io/projected/61fdd063-1c5b-45a4-b1e5-d82f811399be-kube-api-access-h7cjz\") pod \"61fdd063-1c5b-45a4-b1e5-d82f811399be\" (UID: \"61fdd063-1c5b-45a4-b1e5-d82f811399be\") " Jan 29 11:41:25.768073 kubelet[2831]: I0129 11:41:25.766575 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr27w\" (UniqueName: \"kubernetes.io/projected/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-kube-api-access-qr27w\") pod \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " Jan 29 11:41:25.768073 kubelet[2831]: I0129 11:41:25.766588 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-hostproc\") pod \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\" (UID: \"a7ddba80-d7de-4e64-8697-a9e6d39d9c98\") " Jan 29 11:41:25.768073 kubelet[2831]: I0129 11:41:25.766418 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a7ddba80-d7de-4e64-8697-a9e6d39d9c98" (UID: "a7ddba80-d7de-4e64-8697-a9e6d39d9c98"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:41:25.768188 kubelet[2831]: I0129 11:41:25.766451 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a7ddba80-d7de-4e64-8697-a9e6d39d9c98" (UID: "a7ddba80-d7de-4e64-8697-a9e6d39d9c98"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:41:25.768188 kubelet[2831]: I0129 11:41:25.766433 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a7ddba80-d7de-4e64-8697-a9e6d39d9c98" (UID: "a7ddba80-d7de-4e64-8697-a9e6d39d9c98"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:41:25.768188 kubelet[2831]: I0129 11:41:25.766494 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a7ddba80-d7de-4e64-8697-a9e6d39d9c98" (UID: "a7ddba80-d7de-4e64-8697-a9e6d39d9c98"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:41:25.768188 kubelet[2831]: I0129 11:41:25.766506 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a7ddba80-d7de-4e64-8697-a9e6d39d9c98" (UID: "a7ddba80-d7de-4e64-8697-a9e6d39d9c98"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:41:25.768188 kubelet[2831]: I0129 11:41:25.766629 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-hostproc" (OuterVolumeSpecName: "hostproc") pod "a7ddba80-d7de-4e64-8697-a9e6d39d9c98" (UID: "a7ddba80-d7de-4e64-8697-a9e6d39d9c98"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:41:25.768320 kubelet[2831]: I0129 11:41:25.767575 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-cni-path" (OuterVolumeSpecName: "cni-path") pod "a7ddba80-d7de-4e64-8697-a9e6d39d9c98" (UID: "a7ddba80-d7de-4e64-8697-a9e6d39d9c98"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:41:25.768320 kubelet[2831]: I0129 11:41:25.767638 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a7ddba80-d7de-4e64-8697-a9e6d39d9c98" (UID: "a7ddba80-d7de-4e64-8697-a9e6d39d9c98"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:41:25.768320 kubelet[2831]: I0129 11:41:25.767657 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a7ddba80-d7de-4e64-8697-a9e6d39d9c98" (UID: "a7ddba80-d7de-4e64-8697-a9e6d39d9c98"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:41:25.770661 kubelet[2831]: I0129 11:41:25.770275 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a7ddba80-d7de-4e64-8697-a9e6d39d9c98" (UID: "a7ddba80-d7de-4e64-8697-a9e6d39d9c98"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:41:25.770661 kubelet[2831]: I0129 11:41:25.770456 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61fdd063-1c5b-45a4-b1e5-d82f811399be-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "61fdd063-1c5b-45a4-b1e5-d82f811399be" (UID: "61fdd063-1c5b-45a4-b1e5-d82f811399be"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:41:25.770661 kubelet[2831]: I0129 11:41:25.770513 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a7ddba80-d7de-4e64-8697-a9e6d39d9c98" (UID: "a7ddba80-d7de-4e64-8697-a9e6d39d9c98"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:41:25.771551 kubelet[2831]: I0129 11:41:25.771533 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a7ddba80-d7de-4e64-8697-a9e6d39d9c98" (UID: "a7ddba80-d7de-4e64-8697-a9e6d39d9c98"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:41:25.771821 kubelet[2831]: I0129 11:41:25.771787 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a7ddba80-d7de-4e64-8697-a9e6d39d9c98" (UID: "a7ddba80-d7de-4e64-8697-a9e6d39d9c98"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:41:25.773173 kubelet[2831]: I0129 11:41:25.773151 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61fdd063-1c5b-45a4-b1e5-d82f811399be-kube-api-access-h7cjz" (OuterVolumeSpecName: "kube-api-access-h7cjz") pod "61fdd063-1c5b-45a4-b1e5-d82f811399be" (UID: "61fdd063-1c5b-45a4-b1e5-d82f811399be"). InnerVolumeSpecName "kube-api-access-h7cjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:41:25.773918 kubelet[2831]: I0129 11:41:25.773887 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-kube-api-access-qr27w" (OuterVolumeSpecName: "kube-api-access-qr27w") pod "a7ddba80-d7de-4e64-8697-a9e6d39d9c98" (UID: "a7ddba80-d7de-4e64-8697-a9e6d39d9c98"). InnerVolumeSpecName "kube-api-access-qr27w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:41:25.867433 kubelet[2831]: I0129 11:41:25.867399 2831 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 29 11:41:25.867433 kubelet[2831]: I0129 11:41:25.867424 2831 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 11:41:25.867433 kubelet[2831]: I0129 11:41:25.867435 2831 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 29 11:41:25.867433 kubelet[2831]: I0129 11:41:25.867455 2831 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 29 11:41:25.867433 kubelet[2831]: I0129 11:41:25.867465 2831 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 29 11:41:25.867722 kubelet[2831]: I0129 11:41:25.867474 2831 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61fdd063-1c5b-45a4-b1e5-d82f811399be-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 11:41:25.867722 kubelet[2831]: I0129 11:41:25.867483 2831 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 29 11:41:25.867722 kubelet[2831]: I0129 11:41:25.867491 2831 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 29 11:41:25.867722 kubelet[2831]: I0129 11:41:25.867500 2831 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 29 11:41:25.867722 kubelet[2831]: I0129 11:41:25.867508 2831 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 29 11:41:25.867722 kubelet[2831]: I0129 11:41:25.867515 2831 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 29 11:41:25.867722 kubelet[2831]: I0129 11:41:25.867541 2831 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 29 11:41:25.867722 kubelet[2831]: I0129 11:41:25.867550 2831 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-h7cjz\" (UniqueName: \"kubernetes.io/projected/61fdd063-1c5b-45a4-b1e5-d82f811399be-kube-api-access-h7cjz\") on node 
\"localhost\" DevicePath \"\"" Jan 29 11:41:25.867936 kubelet[2831]: I0129 11:41:25.867558 2831 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 29 11:41:25.867936 kubelet[2831]: I0129 11:41:25.867566 2831 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qr27w\" (UniqueName: \"kubernetes.io/projected/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-kube-api-access-qr27w\") on node \"localhost\" DevicePath \"\"" Jan 29 11:41:25.867936 kubelet[2831]: I0129 11:41:25.867575 2831 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a7ddba80-d7de-4e64-8697-a9e6d39d9c98-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 29 11:41:26.502083 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73c56d584c46ee6f2806d494a593ec75516a40f8dd82dbf1b4700384b07d7d7e-rootfs.mount: Deactivated successfully. Jan 29 11:41:26.502274 systemd[1]: var-lib-kubelet-pods-61fdd063\x2d1c5b\x2d45a4\x2db1e5\x2dd82f811399be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh7cjz.mount: Deactivated successfully. Jan 29 11:41:26.502418 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73c56d584c46ee6f2806d494a593ec75516a40f8dd82dbf1b4700384b07d7d7e-shm.mount: Deactivated successfully. Jan 29 11:41:26.502595 systemd[1]: var-lib-kubelet-pods-a7ddba80\x2dd7de\x2d4e64\x2d8697\x2da9e6d39d9c98-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqr27w.mount: Deactivated successfully. Jan 29 11:41:26.502737 systemd[1]: var-lib-kubelet-pods-a7ddba80\x2dd7de\x2d4e64\x2d8697\x2da9e6d39d9c98-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 11:41:26.502909 systemd[1]: var-lib-kubelet-pods-a7ddba80\x2dd7de\x2d4e64\x2d8697\x2da9e6d39d9c98-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 29 11:41:26.609741 kubelet[2831]: I0129 11:41:26.609643 2831 scope.go:117] "RemoveContainer" containerID="29f0edb10e9cd36e748b2f2711aa118790e4f9cd4c640ce236bd038991702aad" Jan 29 11:41:26.615488 containerd[1614]: time="2025-01-29T11:41:26.615426459Z" level=info msg="RemoveContainer for \"29f0edb10e9cd36e748b2f2711aa118790e4f9cd4c640ce236bd038991702aad\"" Jan 29 11:41:26.710979 containerd[1614]: time="2025-01-29T11:41:26.710850303Z" level=info msg="RemoveContainer for \"29f0edb10e9cd36e748b2f2711aa118790e4f9cd4c640ce236bd038991702aad\" returns successfully" Jan 29 11:41:26.711501 kubelet[2831]: I0129 11:41:26.711322 2831 scope.go:117] "RemoveContainer" containerID="54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1" Jan 29 11:41:26.712486 containerd[1614]: time="2025-01-29T11:41:26.712452586Z" level=info msg="RemoveContainer for \"54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1\"" Jan 29 11:41:26.743420 containerd[1614]: time="2025-01-29T11:41:26.743366080Z" level=info msg="RemoveContainer for \"54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1\" returns successfully" Jan 29 11:41:26.745529 kubelet[2831]: I0129 11:41:26.745453 2831 scope.go:117] "RemoveContainer" containerID="2579b5f8da0a3ad47ad1a398ff2dc18511bb5e43eb22204b395548023b4ffd53" Jan 29 11:41:26.747819 containerd[1614]: time="2025-01-29T11:41:26.747775991Z" level=info msg="RemoveContainer for \"2579b5f8da0a3ad47ad1a398ff2dc18511bb5e43eb22204b395548023b4ffd53\"" Jan 29 11:41:26.752358 containerd[1614]: time="2025-01-29T11:41:26.752258140Z" level=info msg="RemoveContainer for \"2579b5f8da0a3ad47ad1a398ff2dc18511bb5e43eb22204b395548023b4ffd53\" returns successfully" Jan 29 11:41:26.752535 kubelet[2831]: I0129 11:41:26.752485 2831 scope.go:117] "RemoveContainer" containerID="f90535142da78c3df7ec09929543c32547b385633a95b347bfbb2cf1c82e2e79" Jan 29 11:41:26.753692 containerd[1614]: time="2025-01-29T11:41:26.753649320Z" level=info msg="RemoveContainer for \"f90535142da78c3df7ec09929543c32547b385633a95b347bfbb2cf1c82e2e79\"" Jan 29 11:41:26.757706 containerd[1614]: time="2025-01-29T11:41:26.757662937Z" level=info msg="RemoveContainer for \"f90535142da78c3df7ec09929543c32547b385633a95b347bfbb2cf1c82e2e79\" returns successfully" Jan 29 11:41:26.757896 kubelet[2831]: I0129 11:41:26.757871 2831 scope.go:117] "RemoveContainer" containerID="b454bcf420a0cf2f561619ba46be818e6082af77a8e1107dd301d263fca17868" Jan 29 11:41:26.758873 containerd[1614]: time="2025-01-29T11:41:26.758840891Z" level=info msg="RemoveContainer for \"b454bcf420a0cf2f561619ba46be818e6082af77a8e1107dd301d263fca17868\"" Jan 29 11:41:26.763243 containerd[1614]: time="2025-01-29T11:41:26.763201408Z" level=info msg="RemoveContainer for \"b454bcf420a0cf2f561619ba46be818e6082af77a8e1107dd301d263fca17868\" returns successfully" Jan 29 11:41:26.763387 kubelet[2831]: I0129 11:41:26.763356 2831 scope.go:117] "RemoveContainer" containerID="6485121a8302bb4bf9e48a1003938c3e9f6bd43ab11f7b195349d1a47275be9c" Jan 29 11:41:26.764210 containerd[1614]: time="2025-01-29T11:41:26.764184432Z" level=info msg="RemoveContainer for \"6485121a8302bb4bf9e48a1003938c3e9f6bd43ab11f7b195349d1a47275be9c\"" Jan 29 11:41:26.767833 containerd[1614]: time="2025-01-29T11:41:26.767802113Z" level=info msg="RemoveContainer for \"6485121a8302bb4bf9e48a1003938c3e9f6bd43ab11f7b195349d1a47275be9c\" returns successfully" Jan 29 11:41:26.767965 kubelet[2831]: I0129 11:41:26.767935 2831 scope.go:117] "RemoveContainer" 
containerID="54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1" Jan 29 11:41:26.768452 containerd[1614]: time="2025-01-29T11:41:26.768397618Z" level=error msg="ContainerStatus for \"54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1\": not found" Jan 29 11:41:26.775749 kubelet[2831]: E0129 11:41:26.775718 2831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1\": not found" containerID="54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1" Jan 29 11:41:26.775878 kubelet[2831]: I0129 11:41:26.775757 2831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1"} err="failed to get container status \"54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"54635f5ae54e6422a1d5d582990a7eab12e2f472896a1b09f4115a4d5c46f9b1\": not found" Jan 29 11:41:26.775918 kubelet[2831]: I0129 11:41:26.775880 2831 scope.go:117] "RemoveContainer" containerID="2579b5f8da0a3ad47ad1a398ff2dc18511bb5e43eb22204b395548023b4ffd53" Jan 29 11:41:26.776157 containerd[1614]: time="2025-01-29T11:41:26.776096405Z" level=error msg="ContainerStatus for \"2579b5f8da0a3ad47ad1a398ff2dc18511bb5e43eb22204b395548023b4ffd53\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2579b5f8da0a3ad47ad1a398ff2dc18511bb5e43eb22204b395548023b4ffd53\": not found" Jan 29 11:41:26.776302 kubelet[2831]: E0129 11:41:26.776273 2831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2579b5f8da0a3ad47ad1a398ff2dc18511bb5e43eb22204b395548023b4ffd53\": not found" containerID="2579b5f8da0a3ad47ad1a398ff2dc18511bb5e43eb22204b395548023b4ffd53" Jan 29 11:41:26.776329 kubelet[2831]: I0129 11:41:26.776297 2831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2579b5f8da0a3ad47ad1a398ff2dc18511bb5e43eb22204b395548023b4ffd53"} err="failed to get container status \"2579b5f8da0a3ad47ad1a398ff2dc18511bb5e43eb22204b395548023b4ffd53\": rpc error: code = NotFound desc = an error occurred when try to find container \"2579b5f8da0a3ad47ad1a398ff2dc18511bb5e43eb22204b395548023b4ffd53\": not found" Jan 29 11:41:26.776329 kubelet[2831]: I0129 11:41:26.776319 2831 scope.go:117] "RemoveContainer" containerID="f90535142da78c3df7ec09929543c32547b385633a95b347bfbb2cf1c82e2e79" Jan 29 11:41:26.776545 containerd[1614]: time="2025-01-29T11:41:26.776488862Z" level=error msg="ContainerStatus for \"f90535142da78c3df7ec09929543c32547b385633a95b347bfbb2cf1c82e2e79\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f90535142da78c3df7ec09929543c32547b385633a95b347bfbb2cf1c82e2e79\": not found" Jan 29 11:41:26.776692 kubelet[2831]: E0129 11:41:26.776671 2831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f90535142da78c3df7ec09929543c32547b385633a95b347bfbb2cf1c82e2e79\": 
not found" containerID="f90535142da78c3df7ec09929543c32547b385633a95b347bfbb2cf1c82e2e79" Jan 29 11:41:26.776692 kubelet[2831]: I0129 11:41:26.776689 2831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f90535142da78c3df7ec09929543c32547b385633a95b347bfbb2cf1c82e2e79"} err="failed to get container status \"f90535142da78c3df7ec09929543c32547b385633a95b347bfbb2cf1c82e2e79\": rpc error: code = NotFound desc = an error occurred when try to find container \"f90535142da78c3df7ec09929543c32547b385633a95b347bfbb2cf1c82e2e79\": not found" Jan 29 11:41:26.776761 kubelet[2831]: I0129 11:41:26.776700 2831 scope.go:117] "RemoveContainer" containerID="b454bcf420a0cf2f561619ba46be818e6082af77a8e1107dd301d263fca17868" Jan 29 11:41:26.776848 containerd[1614]: time="2025-01-29T11:41:26.776821155Z" level=error msg="ContainerStatus for \"b454bcf420a0cf2f561619ba46be818e6082af77a8e1107dd301d263fca17868\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b454bcf420a0cf2f561619ba46be818e6082af77a8e1107dd301d263fca17868\": not found" Jan 29 11:41:26.776929 kubelet[2831]: E0129 11:41:26.776908 2831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b454bcf420a0cf2f561619ba46be818e6082af77a8e1107dd301d263fca17868\": not found" containerID="b454bcf420a0cf2f561619ba46be818e6082af77a8e1107dd301d263fca17868" Jan 29 11:41:26.776986 kubelet[2831]: I0129 11:41:26.776928 2831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b454bcf420a0cf2f561619ba46be818e6082af77a8e1107dd301d263fca17868"} err="failed to get container status \"b454bcf420a0cf2f561619ba46be818e6082af77a8e1107dd301d263fca17868\": rpc error: code = NotFound desc = an error occurred when try to find container \"b454bcf420a0cf2f561619ba46be818e6082af77a8e1107dd301d263fca17868\": not found" Jan 29 11:41:26.776986 kubelet[2831]: I0129 11:41:26.776942 2831 scope.go:117] "RemoveContainer" containerID="6485121a8302bb4bf9e48a1003938c3e9f6bd43ab11f7b195349d1a47275be9c" Jan 29 11:41:26.777087 containerd[1614]: time="2025-01-29T11:41:26.777058778Z" level=error msg="ContainerStatus for \"6485121a8302bb4bf9e48a1003938c3e9f6bd43ab11f7b195349d1a47275be9c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6485121a8302bb4bf9e48a1003938c3e9f6bd43ab11f7b195349d1a47275be9c\": not found" Jan 29 11:41:26.777198 kubelet[2831]: E0129 11:41:26.777166 2831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6485121a8302bb4bf9e48a1003938c3e9f6bd43ab11f7b195349d1a47275be9c\": not found" containerID="6485121a8302bb4bf9e48a1003938c3e9f6bd43ab11f7b195349d1a47275be9c" Jan 29 11:41:26.777198 kubelet[2831]: I0129 11:41:26.777189 2831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6485121a8302bb4bf9e48a1003938c3e9f6bd43ab11f7b195349d1a47275be9c"} err="failed to get container status \"6485121a8302bb4bf9e48a1003938c3e9f6bd43ab11f7b195349d1a47275be9c\": rpc error: code = NotFound desc = an error occurred when try to find container \"6485121a8302bb4bf9e48a1003938c3e9f6bd43ab11f7b195349d1a47275be9c\": not found" Jan 29 11:41:27.324916 kubelet[2831]: E0129 11:41:27.324863 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:41:27.326798 kubelet[2831]: I0129 11:41:27.326764 2831 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61fdd063-1c5b-45a4-b1e5-d82f811399be" path="/var/lib/kubelet/pods/61fdd063-1c5b-45a4-b1e5-d82f811399be/volumes" Jan 29 11:41:27.327419 kubelet[2831]: I0129 11:41:27.327391 2831 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7ddba80-d7de-4e64-8697-a9e6d39d9c98" path="/var/lib/kubelet/pods/a7ddba80-d7de-4e64-8697-a9e6d39d9c98/volumes" Jan 29 11:41:27.387676 kubelet[2831]: E0129 11:41:27.387641 2831 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 11:41:27.447422 sshd[4483]: Connection closed by 10.0.0.1 port 58418 Jan 29 11:41:27.448070 sshd-session[4477]: pam_unix(sshd:session): session closed for user core Jan 29 11:41:27.453726 systemd[1]: Started sshd@24-10.0.0.147:22-10.0.0.1:58592.service - OpenSSH per-connection server daemon (10.0.0.1:58592). Jan 29 11:41:27.454185 systemd[1]: sshd@23-10.0.0.147:22-10.0.0.1:58418.service: Deactivated successfully. Jan 29 11:41:27.458236 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 11:41:27.459361 systemd-logind[1593]: Session 24 logged out. Waiting for processes to exit. Jan 29 11:41:27.460402 systemd-logind[1593]: Removed session 24. Jan 29 11:41:27.492344 sshd[4648]: Accepted publickey for core from 10.0.0.1 port 58592 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:41:27.493775 sshd-session[4648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:41:27.497924 systemd-logind[1593]: New session 25 of user core. Jan 29 11:41:27.506775 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 11:41:27.911114 sshd[4654]: Connection closed by 10.0.0.1 port 58592 Jan 29 11:41:27.911655 sshd-session[4648]: pam_unix(sshd:session): session closed for user core Jan 29 11:41:27.921782 systemd[1]: Started sshd@25-10.0.0.147:22-10.0.0.1:58602.service - OpenSSH per-connection server daemon (10.0.0.1:58602). Jan 29 11:41:27.922417 systemd[1]: sshd@24-10.0.0.147:22-10.0.0.1:58592.service: Deactivated successfully. 
Jan 29 11:41:27.932902 kubelet[2831]: I0129 11:41:27.931463 2831 topology_manager.go:215] "Topology Admit Handler" podUID="71545a83-b23d-4f47-b436-d3bd2c157993" podNamespace="kube-system" podName="cilium-6bmjd" Jan 29 11:41:27.932902 kubelet[2831]: E0129 11:41:27.931627 2831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a7ddba80-d7de-4e64-8697-a9e6d39d9c98" containerName="mount-cgroup" Jan 29 11:41:27.932902 kubelet[2831]: E0129 11:41:27.931644 2831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a7ddba80-d7de-4e64-8697-a9e6d39d9c98" containerName="mount-bpf-fs" Jan 29 11:41:27.932902 kubelet[2831]: E0129 11:41:27.931652 2831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61fdd063-1c5b-45a4-b1e5-d82f811399be" containerName="cilium-operator" Jan 29 11:41:27.932902 kubelet[2831]: E0129 11:41:27.931660 2831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a7ddba80-d7de-4e64-8697-a9e6d39d9c98" containerName="clean-cilium-state" Jan 29 11:41:27.932902 kubelet[2831]: E0129 11:41:27.931669 2831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a7ddba80-d7de-4e64-8697-a9e6d39d9c98" containerName="apply-sysctl-overwrites" Jan 29 11:41:27.932902 kubelet[2831]: E0129 11:41:27.931677 2831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a7ddba80-d7de-4e64-8697-a9e6d39d9c98" containerName="cilium-agent" Jan 29 11:41:27.932902 kubelet[2831]: I0129 11:41:27.931701 2831 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ddba80-d7de-4e64-8697-a9e6d39d9c98" containerName="cilium-agent" Jan 29 11:41:27.932902 kubelet[2831]: I0129 11:41:27.931709 2831 memory_manager.go:354] "RemoveStaleState removing state" podUID="61fdd063-1c5b-45a4-b1e5-d82f811399be" containerName="cilium-operator" Jan 29 11:41:27.934488 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 11:41:27.934783 systemd-logind[1593]: Session 25 logged out. Waiting for processes to exit. Jan 29 11:41:27.938731 systemd-logind[1593]: Removed session 25. Jan 29 11:41:27.939286 kubelet[2831]: W0129 11:41:27.939258 2831 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 29 11:41:27.939370 kubelet[2831]: E0129 11:41:27.939359 2831 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 29 11:41:27.956625 sshd[4662]: Accepted publickey for core from 10.0.0.1 port 58602 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:41:27.958248 sshd-session[4662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:41:27.962967 systemd-logind[1593]: New session 26 of user core. Jan 29 11:41:27.978795 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 29 11:41:28.030188 sshd[4668]: Connection closed by 10.0.0.1 port 58602 Jan 29 11:41:28.030535 sshd-session[4662]: pam_unix(sshd:session): session closed for user core Jan 29 11:41:28.038716 systemd[1]: Started sshd@26-10.0.0.147:22-10.0.0.1:58608.service - OpenSSH per-connection server daemon (10.0.0.1:58608). Jan 29 11:41:28.039182 systemd[1]: sshd@25-10.0.0.147:22-10.0.0.1:58602.service: Deactivated successfully. Jan 29 11:41:28.042277 systemd-logind[1593]: Session 26 logged out. Waiting for processes to exit. Jan 29 11:41:28.043181 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 11:41:28.044480 systemd-logind[1593]: Removed session 26. Jan 29 11:41:28.070916 sshd[4671]: Accepted publickey for core from 10.0.0.1 port 58608 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:41:28.072280 sshd-session[4671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:41:28.076779 systemd-logind[1593]: New session 27 of user core. Jan 29 11:41:28.077669 kubelet[2831]: I0129 11:41:28.077637 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/71545a83-b23d-4f47-b436-d3bd2c157993-cni-path\") pod \"cilium-6bmjd\" (UID: \"71545a83-b23d-4f47-b436-d3bd2c157993\") " pod="kube-system/cilium-6bmjd" Jan 29 11:41:28.077736 kubelet[2831]: I0129 11:41:28.077677 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71545a83-b23d-4f47-b436-d3bd2c157993-lib-modules\") pod \"cilium-6bmjd\" (UID: \"71545a83-b23d-4f47-b436-d3bd2c157993\") " pod="kube-system/cilium-6bmjd" Jan 29 11:41:28.077736 kubelet[2831]: I0129 11:41:28.077699 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71545a83-b23d-4f47-b436-d3bd2c157993-etc-cni-netd\") pod \"cilium-6bmjd\" (UID: \"71545a83-b23d-4f47-b436-d3bd2c157993\") " pod="kube-system/cilium-6bmjd" Jan 29 11:41:28.077736 kubelet[2831]: I0129 11:41:28.077713 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/71545a83-b23d-4f47-b436-d3bd2c157993-cilium-cgroup\") pod \"cilium-6bmjd\" (UID: \"71545a83-b23d-4f47-b436-d3bd2c157993\") " pod="kube-system/cilium-6bmjd" Jan 29 11:41:28.077736 kubelet[2831]: I0129 11:41:28.077728 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71545a83-b23d-4f47-b436-d3bd2c157993-cilium-config-path\") pod \"cilium-6bmjd\" (UID: \"71545a83-b23d-4f47-b436-d3bd2c157993\") " pod="kube-system/cilium-6bmjd" Jan 29 11:41:28.077845 kubelet[2831]: I0129 11:41:28.077742 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/71545a83-b23d-4f47-b436-d3bd2c157993-hostproc\") pod \"cilium-6bmjd\" (UID: \"71545a83-b23d-4f47-b436-d3bd2c157993\") " pod="kube-system/cilium-6bmjd" Jan 29 11:41:28.077845 kubelet[2831]: I0129 11:41:28.077757 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71545a83-b23d-4f47-b436-d3bd2c157993-xtables-lock\") pod \"cilium-6bmjd\" (UID: 
\"71545a83-b23d-4f47-b436-d3bd2c157993\") " pod="kube-system/cilium-6bmjd" Jan 29 11:41:28.077845 kubelet[2831]: I0129 11:41:28.077777 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/71545a83-b23d-4f47-b436-d3bd2c157993-hubble-tls\") pod \"cilium-6bmjd\" (UID: \"71545a83-b23d-4f47-b436-d3bd2c157993\") " pod="kube-system/cilium-6bmjd" Jan 29 11:41:28.077845 kubelet[2831]: I0129 11:41:28.077793 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/71545a83-b23d-4f47-b436-d3bd2c157993-host-proc-sys-net\") pod \"cilium-6bmjd\" (UID: \"71545a83-b23d-4f47-b436-d3bd2c157993\") " pod="kube-system/cilium-6bmjd" Jan 29 11:41:28.077845 kubelet[2831]: I0129 11:41:28.077807 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/71545a83-b23d-4f47-b436-d3bd2c157993-bpf-maps\") pod \"cilium-6bmjd\" (UID: \"71545a83-b23d-4f47-b436-d3bd2c157993\") " pod="kube-system/cilium-6bmjd" Jan 29 11:41:28.077845 kubelet[2831]: I0129 11:41:28.077837 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/71545a83-b23d-4f47-b436-d3bd2c157993-clustermesh-secrets\") pod \"cilium-6bmjd\" (UID: \"71545a83-b23d-4f47-b436-d3bd2c157993\") " pod="kube-system/cilium-6bmjd" Jan 29 11:41:28.077974 kubelet[2831]: I0129 11:41:28.077854 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/71545a83-b23d-4f47-b436-d3bd2c157993-host-proc-sys-kernel\") pod \"cilium-6bmjd\" (UID: \"71545a83-b23d-4f47-b436-d3bd2c157993\") " pod="kube-system/cilium-6bmjd" Jan 29 11:41:28.077974 kubelet[2831]: I0129 11:41:28.077868 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/71545a83-b23d-4f47-b436-d3bd2c157993-cilium-run\") pod \"cilium-6bmjd\" (UID: \"71545a83-b23d-4f47-b436-d3bd2c157993\") " pod="kube-system/cilium-6bmjd" Jan 29 11:41:28.077974 kubelet[2831]: I0129 11:41:28.077881 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/71545a83-b23d-4f47-b436-d3bd2c157993-cilium-ipsec-secrets\") pod \"cilium-6bmjd\" (UID: \"71545a83-b23d-4f47-b436-d3bd2c157993\") " pod="kube-system/cilium-6bmjd" Jan 29 11:41:28.077974 kubelet[2831]: I0129 11:41:28.077894 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv8xp\" (UniqueName: \"kubernetes.io/projected/71545a83-b23d-4f47-b436-d3bd2c157993-kube-api-access-mv8xp\") pod \"cilium-6bmjd\" (UID: \"71545a83-b23d-4f47-b436-d3bd2c157993\") " pod="kube-system/cilium-6bmjd" Jan 29 11:41:28.087853 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 29 11:41:29.181016 kubelet[2831]: E0129 11:41:29.180957 2831 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Jan 29 11:41:29.181602 kubelet[2831]: E0129 11:41:29.181070 2831 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71545a83-b23d-4f47-b436-d3bd2c157993-clustermesh-secrets podName:71545a83-b23d-4f47-b436-d3bd2c157993 nodeName:}" failed. No retries permitted until 2025-01-29 11:41:29.681048342 +0000 UTC m=+82.437184649 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/71545a83-b23d-4f47-b436-d3bd2c157993-clustermesh-secrets") pod "cilium-6bmjd" (UID: "71545a83-b23d-4f47-b436-d3bd2c157993") : failed to sync secret cache: timed out waiting for the condition
Jan 29 11:41:29.326012 kubelet[2831]: E0129 11:41:29.325964 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:41:29.744842 kubelet[2831]: E0129 11:41:29.744805 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:41:29.745376 containerd[1614]: time="2025-01-29T11:41:29.745335621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6bmjd,Uid:71545a83-b23d-4f47-b436-d3bd2c157993,Namespace:kube-system,Attempt:0,}"
Jan 29 11:41:29.765876 containerd[1614]: time="2025-01-29T11:41:29.765265480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:41:29.765876 containerd[1614]: time="2025-01-29T11:41:29.765854041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:41:29.765876 containerd[1614]: time="2025-01-29T11:41:29.765868488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:41:29.766049 containerd[1614]: time="2025-01-29T11:41:29.765964110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:41:29.804162 containerd[1614]: time="2025-01-29T11:41:29.804116658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6bmjd,Uid:71545a83-b23d-4f47-b436-d3bd2c157993,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0736532c3a22709b3066e551257be600c60e343cff6801f2595d6d694caece5\""
Jan 29 11:41:29.804803 kubelet[2831]: E0129 11:41:29.804782 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:41:29.806680 containerd[1614]: time="2025-01-29T11:41:29.806631707Z" level=info msg="CreateContainer within sandbox \"f0736532c3a22709b3066e551257be600c60e343cff6801f2595d6d694caece5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 11:41:29.819913 containerd[1614]: time="2025-01-29T11:41:29.819873792Z" level=info msg="CreateContainer within sandbox \"f0736532c3a22709b3066e551257be600c60e343cff6801f2595d6d694caece5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"99deb18820451f6e6b197d65329cbd7c208e404cfd44bfc5d333a16130b6bc28\""
Jan 29 11:41:29.820326 containerd[1614]: time="2025-01-29T11:41:29.820304662Z" level=info msg="StartContainer for \"99deb18820451f6e6b197d65329cbd7c208e404cfd44bfc5d333a16130b6bc28\""
Jan 29 11:41:29.836490 kubelet[2831]: I0129 11:41:29.836429 2831 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T11:41:29Z","lastTransitionTime":"2025-01-29T11:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 11:41:29.875750 containerd[1614]: time="2025-01-29T11:41:29.875704853Z" level=info msg="StartContainer for \"99deb18820451f6e6b197d65329cbd7c208e404cfd44bfc5d333a16130b6bc28\" returns successfully"
Jan 29 11:41:29.925871 containerd[1614]: time="2025-01-29T11:41:29.925782288Z" level=info msg="shim disconnected" id=99deb18820451f6e6b197d65329cbd7c208e404cfd44bfc5d333a16130b6bc28 namespace=k8s.io
Jan 29 11:41:29.925871 containerd[1614]: time="2025-01-29T11:41:29.925846620Z" level=warning msg="cleaning up after shim disconnected" id=99deb18820451f6e6b197d65329cbd7c208e404cfd44bfc5d333a16130b6bc28 namespace=k8s.io
Jan 29 11:41:29.925871 containerd[1614]: time="2025-01-29T11:41:29.925854976Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:41:30.621538 kubelet[2831]: E0129 11:41:30.621483 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:41:30.623113 containerd[1614]: time="2025-01-29T11:41:30.623081112Z" level=info msg="CreateContainer within sandbox \"f0736532c3a22709b3066e551257be600c60e343cff6801f2595d6d694caece5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 11:41:30.641898 containerd[1614]: time="2025-01-29T11:41:30.641843483Z" level=info msg="CreateContainer within sandbox \"f0736532c3a22709b3066e551257be600c60e343cff6801f2595d6d694caece5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e936c9cfccb75d11ff5f2a0e84d9a93a9087a86f7af678fdc1a595c6655e3352\""
Jan 29 11:41:30.642461 containerd[1614]: time="2025-01-29T11:41:30.642393459Z" level=info msg="StartContainer for \"e936c9cfccb75d11ff5f2a0e84d9a93a9087a86f7af678fdc1a595c6655e3352\""
Jan 29 11:41:30.696883 containerd[1614]: time="2025-01-29T11:41:30.696834366Z" level=info msg="StartContainer for \"e936c9cfccb75d11ff5f2a0e84d9a93a9087a86f7af678fdc1a595c6655e3352\" returns successfully"
Jan 29 11:41:30.723543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e936c9cfccb75d11ff5f2a0e84d9a93a9087a86f7af678fdc1a595c6655e3352-rootfs.mount: Deactivated successfully.
Jan 29 11:41:30.726613 containerd[1614]: time="2025-01-29T11:41:30.726552154Z" level=info msg="shim disconnected" id=e936c9cfccb75d11ff5f2a0e84d9a93a9087a86f7af678fdc1a595c6655e3352 namespace=k8s.io
Jan 29 11:41:30.726613 containerd[1614]: time="2025-01-29T11:41:30.726611097Z" level=warning msg="cleaning up after shim disconnected" id=e936c9cfccb75d11ff5f2a0e84d9a93a9087a86f7af678fdc1a595c6655e3352 namespace=k8s.io
Jan 29 11:41:30.726613 containerd[1614]: time="2025-01-29T11:41:30.726619883Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:41:31.624606 kubelet[2831]: E0129 11:41:31.624556 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:41:31.627343 containerd[1614]: time="2025-01-29T11:41:31.626514730Z" level=info msg="CreateContainer within sandbox \"f0736532c3a22709b3066e551257be600c60e343cff6801f2595d6d694caece5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 11:41:31.645116 containerd[1614]: time="2025-01-29T11:41:31.645069831Z" level=info msg="CreateContainer within sandbox \"f0736532c3a22709b3066e551257be600c60e343cff6801f2595d6d694caece5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dcd18ce8d329ccd80ef40233023188575d6347ff1d3ea364cafe27adff4150fa\""
Jan 29 11:41:31.645647 containerd[1614]: time="2025-01-29T11:41:31.645619608Z" level=info msg="StartContainer for \"dcd18ce8d329ccd80ef40233023188575d6347ff1d3ea364cafe27adff4150fa\""
Jan 29 11:41:31.705658 containerd[1614]: time="2025-01-29T11:41:31.705505724Z" level=info msg="StartContainer for \"dcd18ce8d329ccd80ef40233023188575d6347ff1d3ea364cafe27adff4150fa\" returns successfully"
Jan 29 11:41:31.728377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcd18ce8d329ccd80ef40233023188575d6347ff1d3ea364cafe27adff4150fa-rootfs.mount: Deactivated successfully.
Jan 29 11:41:31.734468 containerd[1614]: time="2025-01-29T11:41:31.734391038Z" level=info msg="shim disconnected" id=dcd18ce8d329ccd80ef40233023188575d6347ff1d3ea364cafe27adff4150fa namespace=k8s.io
Jan 29 11:41:31.734584 containerd[1614]: time="2025-01-29T11:41:31.734464568Z" level=warning msg="cleaning up after shim disconnected" id=dcd18ce8d329ccd80ef40233023188575d6347ff1d3ea364cafe27adff4150fa namespace=k8s.io
Jan 29 11:41:31.734584 containerd[1614]: time="2025-01-29T11:41:31.734479217Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:41:32.388915 kubelet[2831]: E0129 11:41:32.388870 2831 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 11:41:32.627536 kubelet[2831]: E0129 11:41:32.627484 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:41:32.630099 containerd[1614]: time="2025-01-29T11:41:32.630047817Z" level=info msg="CreateContainer within sandbox \"f0736532c3a22709b3066e551257be600c60e343cff6801f2595d6d694caece5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 11:41:32.653811 containerd[1614]: time="2025-01-29T11:41:32.653673724Z" level=info msg="CreateContainer within sandbox \"f0736532c3a22709b3066e551257be600c60e343cff6801f2595d6d694caece5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2541ac67ed9fafc64ec06c816eb018eca01dc35811e17e8318ca2f7b3a2a53ad\""
Jan 29 11:41:32.654265 containerd[1614]: time="2025-01-29T11:41:32.654182963Z" level=info msg="StartContainer for \"2541ac67ed9fafc64ec06c816eb018eca01dc35811e17e8318ca2f7b3a2a53ad\""
Jan 29 11:41:32.715195 containerd[1614]: time="2025-01-29T11:41:32.715151182Z" level=info msg="StartContainer for \"2541ac67ed9fafc64ec06c816eb018eca01dc35811e17e8318ca2f7b3a2a53ad\" returns successfully"
Jan 29 11:41:32.731237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2541ac67ed9fafc64ec06c816eb018eca01dc35811e17e8318ca2f7b3a2a53ad-rootfs.mount: Deactivated successfully.
Jan 29 11:41:32.735118 containerd[1614]: time="2025-01-29T11:41:32.735067949Z" level=info msg="shim disconnected" id=2541ac67ed9fafc64ec06c816eb018eca01dc35811e17e8318ca2f7b3a2a53ad namespace=k8s.io
Jan 29 11:41:32.735224 containerd[1614]: time="2025-01-29T11:41:32.735117042Z" level=warning msg="cleaning up after shim disconnected" id=2541ac67ed9fafc64ec06c816eb018eca01dc35811e17e8318ca2f7b3a2a53ad namespace=k8s.io
Jan 29 11:41:32.735224 containerd[1614]: time="2025-01-29T11:41:32.735129547Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:41:33.631539 kubelet[2831]: E0129 11:41:33.631482 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:41:33.633442 containerd[1614]: time="2025-01-29T11:41:33.633392844Z" level=info msg="CreateContainer within sandbox \"f0736532c3a22709b3066e551257be600c60e343cff6801f2595d6d694caece5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 11:41:33.652782 containerd[1614]: time="2025-01-29T11:41:33.652730283Z" level=info msg="CreateContainer within sandbox \"f0736532c3a22709b3066e551257be600c60e343cff6801f2595d6d694caece5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"833647f7577f526847438d47ab5b4a401e8071546ee1d77abf2bff811d86712e\""
Jan 29 11:41:33.653288 containerd[1614]: time="2025-01-29T11:41:33.653250403Z" level=info msg="StartContainer for \"833647f7577f526847438d47ab5b4a401e8071546ee1d77abf2bff811d86712e\""
Jan 29 11:41:33.715012 containerd[1614]: time="2025-01-29T11:41:33.714563842Z" level=info msg="StartContainer for \"833647f7577f526847438d47ab5b4a401e8071546ee1d77abf2bff811d86712e\" returns successfully"
Jan 29 11:41:34.111559 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 29 11:41:34.635942 kubelet[2831]: E0129 11:41:34.635902 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:41:35.746429 kubelet[2831]: E0129 11:41:35.746379 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:41:37.081978 systemd-networkd[1246]: lxc_health: Link UP
Jan 29 11:41:37.094707 systemd-networkd[1246]: lxc_health: Gained carrier
Jan 29 11:41:37.325899 kubelet[2831]: E0129 11:41:37.325774 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:41:37.749555 kubelet[2831]: E0129 11:41:37.746628 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:41:37.766915 kubelet[2831]: I0129 11:41:37.765916 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6bmjd" podStartSLOduration=10.765900756 podStartE2EDuration="10.765900756s" podCreationTimestamp="2025-01-29 11:41:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:41:34.736398683 +0000 UTC m=+87.492534990" watchObservedRunningTime="2025-01-29 11:41:37.765900756 +0000 UTC m=+90.522037063"
Jan 29 11:41:38.645841 kubelet[2831]: E0129 11:41:38.645801 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:41:38.674490 systemd[1]: run-containerd-runc-k8s.io-833647f7577f526847438d47ab5b4a401e8071546ee1d77abf2bff811d86712e-runc.SyDAJk.mount: Deactivated successfully.
Jan 29 11:41:38.745697 systemd-networkd[1246]: lxc_health: Gained IPv6LL
Jan 29 11:41:39.646823 kubelet[2831]: E0129 11:41:39.646784 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:41:42.910801 sshd[4677]: Connection closed by 10.0.0.1 port 58608
Jan 29 11:41:42.911255 sshd-session[4671]: pam_unix(sshd:session): session closed for user core
Jan 29 11:41:42.914921 systemd[1]: sshd@26-10.0.0.147:22-10.0.0.1:58608.service: Deactivated successfully.
Jan 29 11:41:42.917200 systemd-logind[1593]: Session 27 logged out. Waiting for processes to exit.
Jan 29 11:41:42.917464 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 11:41:42.918360 systemd-logind[1593]: Removed session 27.