Jan 13 20:40:07.871555 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 13 20:40:07.871576 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:40:07.871587 kernel: BIOS-provided physical RAM map:
Jan 13 20:40:07.871594 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 20:40:07.871600 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 20:40:07.871606 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 20:40:07.871622 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 13 20:40:07.871629 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 13 20:40:07.871635 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 20:40:07.871644 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 13 20:40:07.871650 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 20:40:07.871656 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 20:40:07.871662 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 20:40:07.871669 kernel: NX (Execute Disable) protection: active
Jan 13 20:40:07.871676 kernel: APIC: Static calls initialized
Jan 13 20:40:07.871685 kernel: SMBIOS 2.8 present.
Jan 13 20:40:07.871692 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 13 20:40:07.871699 kernel: Hypervisor detected: KVM
Jan 13 20:40:07.871706 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:40:07.871712 kernel: kvm-clock: using sched offset of 2261088398 cycles
Jan 13 20:40:07.871719 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:40:07.871726 kernel: tsc: Detected 2794.748 MHz processor
Jan 13 20:40:07.871733 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:40:07.871741 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:40:07.871747 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 13 20:40:07.871757 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 20:40:07.871764 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:40:07.871771 kernel: Using GB pages for direct mapping
Jan 13 20:40:07.871778 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:40:07.871784 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 13 20:40:07.871791 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:40:07.871798 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:40:07.871805 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:40:07.871814 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 13 20:40:07.871821 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:40:07.871828 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:40:07.871835 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:40:07.871842 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:40:07.871848 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 13 20:40:07.871855 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 13 20:40:07.871866 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 13 20:40:07.871875 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 13 20:40:07.871882 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 13 20:40:07.871889 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 13 20:40:07.871896 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 13 20:40:07.871903 kernel: No NUMA configuration found
Jan 13 20:40:07.871910 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 13 20:40:07.871950 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 13 20:40:07.871961 kernel: Zone ranges:
Jan 13 20:40:07.871968 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:40:07.871975 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 13 20:40:07.871982 kernel: Normal empty
Jan 13 20:40:07.871990 kernel: Movable zone start for each node
Jan 13 20:40:07.871997 kernel: Early memory node ranges
Jan 13 20:40:07.872004 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 20:40:07.872011 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 13 20:40:07.872018 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 13 20:40:07.872028 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:40:07.872035 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 20:40:07.872042 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 13 20:40:07.872049 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 20:40:07.872056 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:40:07.872063 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 20:40:07.872070 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 20:40:07.872078 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:40:07.872085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:40:07.872095 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:40:07.872102 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:40:07.872109 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:40:07.872116 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 20:40:07.872123 kernel: TSC deadline timer available
Jan 13 20:40:07.872130 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 20:40:07.872137 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:40:07.872144 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 20:40:07.872152 kernel: kvm-guest: setup PV sched yield
Jan 13 20:40:07.872161 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 13 20:40:07.872168 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:40:07.872175 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:40:07.872183 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 20:40:07.872190 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 20:40:07.872197 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 20:40:07.872204 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 20:40:07.872211 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:40:07.872218 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:40:07.872226 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:40:07.872236 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:40:07.872243 kernel: random: crng init done
Jan 13 20:40:07.872250 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:40:07.872258 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:40:07.872265 kernel: Fallback order for Node 0: 0
Jan 13 20:40:07.872272 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 13 20:40:07.872279 kernel: Policy zone: DMA32
Jan 13 20:40:07.872286 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:40:07.872296 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 136900K reserved, 0K cma-reserved)
Jan 13 20:40:07.872303 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 20:40:07.872310 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 13 20:40:07.872318 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:40:07.872325 kernel: Dynamic Preempt: voluntary
Jan 13 20:40:07.872332 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:40:07.872339 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:40:07.872347 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 20:40:07.872354 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:40:07.872364 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:40:07.872371 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:40:07.872378 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:40:07.872385 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 20:40:07.872393 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 20:40:07.872400 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:40:07.872407 kernel: Console: colour VGA+ 80x25
Jan 13 20:40:07.872414 kernel: printk: console [ttyS0] enabled
Jan 13 20:40:07.872421 kernel: ACPI: Core revision 20230628
Jan 13 20:40:07.872431 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 20:40:07.872438 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:40:07.872445 kernel: x2apic enabled
Jan 13 20:40:07.872452 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:40:07.872460 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 20:40:07.872467 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 20:40:07.872474 kernel: kvm-guest: setup PV IPIs
Jan 13 20:40:07.872491 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 20:40:07.872499 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 20:40:07.872506 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 13 20:40:07.872514 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 20:40:07.872521 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 20:40:07.872531 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 20:40:07.872539 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:40:07.872546 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:40:07.872554 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:40:07.872564 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:40:07.872571 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 20:40:07.872579 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 20:40:07.872587 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 20:40:07.872594 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 20:40:07.872602 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 20:40:07.872610 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 20:40:07.872624 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 20:40:07.872632 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:40:07.872642 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:40:07.872649 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:40:07.872657 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:40:07.872664 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 20:40:07.872672 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:40:07.872679 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:40:07.872687 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:40:07.872694 kernel: landlock: Up and running.
Jan 13 20:40:07.872701 kernel: SELinux: Initializing.
Jan 13 20:40:07.872711 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:40:07.872719 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:40:07.872727 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 20:40:07.872734 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:40:07.872742 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:40:07.872750 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:40:07.872757 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 20:40:07.872765 kernel: ... version: 0
Jan 13 20:40:07.872775 kernel: ... bit width: 48
Jan 13 20:40:07.872782 kernel: ... generic registers: 6
Jan 13 20:40:07.872790 kernel: ... value mask: 0000ffffffffffff
Jan 13 20:40:07.872797 kernel: ... max period: 00007fffffffffff
Jan 13 20:40:07.872804 kernel: ... fixed-purpose events: 0
Jan 13 20:40:07.872812 kernel: ... event mask: 000000000000003f
Jan 13 20:40:07.872819 kernel: signal: max sigframe size: 1776
Jan 13 20:40:07.872827 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:40:07.872834 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:40:07.872842 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:40:07.872851 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:40:07.872858 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 20:40:07.872866 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 20:40:07.872873 kernel: smpboot: Max logical packages: 1
Jan 13 20:40:07.872881 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 13 20:40:07.872888 kernel: devtmpfs: initialized
Jan 13 20:40:07.872895 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:40:07.872903 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:40:07.872911 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 20:40:07.872936 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:40:07.872943 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:40:07.872951 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:40:07.872958 kernel: audit: type=2000 audit(1736800807.888:1): state=initialized audit_enabled=0 res=1
Jan 13 20:40:07.872966 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:40:07.872973 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:40:07.872981 kernel: cpuidle: using governor menu
Jan 13 20:40:07.872988 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:40:07.872996 kernel: dca service started, version 1.12.1
Jan 13 20:40:07.873006 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 20:40:07.873013 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 20:40:07.873021 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:40:07.873029 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:40:07.873036 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:40:07.873044 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:40:07.873051 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:40:07.873059 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:40:07.873066 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:40:07.873076 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:40:07.873083 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:40:07.873091 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:40:07.873098 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:40:07.873106 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:40:07.873113 kernel: ACPI: Interpreter enabled
Jan 13 20:40:07.873121 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 20:40:07.873128 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:40:07.873136 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:40:07.873146 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:40:07.873154 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 20:40:07.873161 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:40:07.873333 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:40:07.873461 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 20:40:07.873587 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 20:40:07.873597 kernel: PCI host bridge to bus 0000:00
Jan 13 20:40:07.873732 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:40:07.873844 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:40:07.873977 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:40:07.874089 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 13 20:40:07.874198 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 20:40:07.874308 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 13 20:40:07.874417 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:40:07.874556 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 20:40:07.874694 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 20:40:07.874815 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 13 20:40:07.874947 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 13 20:40:07.875068 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 13 20:40:07.875188 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:40:07.875327 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:40:07.875448 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 13 20:40:07.875567 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 13 20:40:07.875697 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 13 20:40:07.875828 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 20:40:07.875964 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 20:40:07.876084 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 13 20:40:07.876207 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 13 20:40:07.876334 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 20:40:07.876455 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 13 20:40:07.876574 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 13 20:40:07.876703 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 13 20:40:07.876823 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 13 20:40:07.876966 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 20:40:07.877094 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 20:40:07.877223 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 20:40:07.877343 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 13 20:40:07.877462 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 13 20:40:07.877594 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 20:40:07.877724 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 13 20:40:07.877734 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:40:07.877746 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:40:07.877754 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:40:07.877762 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:40:07.877769 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 20:40:07.877777 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 20:40:07.877785 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 20:40:07.877792 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 20:40:07.877800 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 20:40:07.877807 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 20:40:07.877817 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 20:40:07.877825 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 20:40:07.877832 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 20:40:07.877840 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 20:40:07.877847 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 20:40:07.877855 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 20:40:07.877862 kernel: iommu: Default domain type: Translated
Jan 13 20:40:07.877870 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:40:07.877877 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:40:07.877887 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:40:07.877895 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 20:40:07.877902 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 13 20:40:07.878046 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 20:40:07.878165 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 20:40:07.878283 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:40:07.878293 kernel: vgaarb: loaded
Jan 13 20:40:07.878300 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 20:40:07.878312 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 20:40:07.878320 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:40:07.878327 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:40:07.878335 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:40:07.878343 kernel: pnp: PnP ACPI init
Jan 13 20:40:07.878469 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 20:40:07.878481 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 20:40:07.878489 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:40:07.878500 kernel: NET: Registered PF_INET protocol family
Jan 13 20:40:07.878507 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:40:07.878515 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:40:07.878523 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:40:07.878531 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:40:07.878539 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:40:07.878546 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:40:07.878554 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:40:07.878561 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:40:07.878571 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:40:07.878579 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:40:07.878699 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:40:07.878810 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:40:07.878932 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:40:07.879044 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 13 20:40:07.879153 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 20:40:07.879263 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 13 20:40:07.879276 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:40:07.879284 kernel: Initialise system trusted keyrings
Jan 13 20:40:07.879292 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:40:07.879300 kernel: Key type asymmetric registered
Jan 13 20:40:07.879307 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:40:07.879315 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:40:07.879322 kernel: io scheduler mq-deadline registered
Jan 13 20:40:07.879330 kernel: io scheduler kyber registered
Jan 13 20:40:07.879337 kernel: io scheduler bfq registered
Jan 13 20:40:07.879347 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:40:07.879355 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 20:40:07.879364 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 20:40:07.879371 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 20:40:07.879379 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:40:07.879386 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:40:07.879394 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:40:07.879402 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:40:07.879409 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:40:07.879540 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 20:40:07.879552 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:40:07.879678 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 20:40:07.879792 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T20:40:07 UTC (1736800807)
Jan 13 20:40:07.879904 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 13 20:40:07.879915 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 20:40:07.879996 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:40:07.880003 kernel: Segment Routing with IPv6
Jan 13 20:40:07.880022 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:40:07.880037 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:40:07.880051 kernel: Key type dns_resolver registered
Jan 13 20:40:07.880065 kernel: IPI shorthand broadcast: enabled
Jan 13 20:40:07.880074 kernel: sched_clock: Marking stable (544001892, 106252052)->(698430216, -48176272)
Jan 13 20:40:07.880105 kernel: registered taskstats version 1
Jan 13 20:40:07.880132 kernel: Loading compiled-in X.509 certificates
Jan 13 20:40:07.880156 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 13 20:40:07.880165 kernel: Key type .fscrypt registered
Jan 13 20:40:07.880186 kernel: Key type fscrypt-provisioning registered
Jan 13 20:40:07.880203 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:40:07.880222 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:40:07.880233 kernel: ima: No architecture policies found
Jan 13 20:40:07.880243 kernel: clk: Disabling unused clocks
Jan 13 20:40:07.880251 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 13 20:40:07.880259 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 20:40:07.880266 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 13 20:40:07.880274 kernel: Run /init as init process
Jan 13 20:40:07.880285 kernel: with arguments:
Jan 13 20:40:07.880292 kernel: /init
Jan 13 20:40:07.880299 kernel: with environment:
Jan 13 20:40:07.880307 kernel: HOME=/
Jan 13 20:40:07.880314 kernel: TERM=linux
Jan 13 20:40:07.880321 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:40:07.880331 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:40:07.880341 systemd[1]: Detected virtualization kvm.
Jan 13 20:40:07.880352 systemd[1]: Detected architecture x86-64.
Jan 13 20:40:07.880360 systemd[1]: Running in initrd.
Jan 13 20:40:07.880368 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:40:07.880375 systemd[1]: Hostname set to <localhost>.
Jan 13 20:40:07.880383 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:40:07.880391 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:40:07.880399 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:40:07.880407 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:40:07.880419 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:40:07.880438 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:40:07.880449 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:40:07.880457 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:40:07.880467 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:40:07.880477 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:40:07.880486 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:40:07.880494 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:40:07.880502 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:40:07.880510 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:40:07.880518 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:40:07.880527 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:40:07.880536 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:40:07.880548 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:40:07.880558 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:40:07.880567 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:40:07.880575 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:40:07.880583 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:40:07.880594 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:40:07.880602 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:40:07.880610 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:40:07.880629 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:40:07.880637 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:40:07.880646 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:40:07.880654 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:40:07.880662 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:40:07.880670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:40:07.880679 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:40:07.880687 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:40:07.880695 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:40:07.880729 systemd-journald[192]: Collecting audit messages is disabled. Jan 13 20:40:07.880750 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:40:07.880761 systemd-journald[192]: Journal started Jan 13 20:40:07.880781 systemd-journald[192]: Runtime Journal (/run/log/journal/b520f8c667ca413c8e3c15861f43f45b) is 6.0M, max 48.4M, 42.3M free. Jan 13 20:40:07.881805 systemd-modules-load[195]: Inserted module 'overlay' Jan 13 20:40:07.910547 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:40:07.910563 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:40:07.913123 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 13 20:40:07.914079 kernel: Bridge firewalling registered Jan 13 20:40:07.924329 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 13 20:40:07.925629 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:40:07.929694 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:40:07.942095 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:40:07.945961 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:40:07.947756 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:40:07.951443 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:40:07.956571 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:40:07.959445 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:40:07.960061 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:40:07.974086 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:40:07.975452 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:40:07.979562 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:40:07.984653 dracut-cmdline[229]: dracut-dracut-053 Jan 13 20:40:07.987211 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:40:08.032333 systemd-resolved[234]: Positive Trust Anchors: Jan 13 20:40:08.032348 systemd-resolved[234]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:40:08.032382 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:40:08.035016 systemd-resolved[234]: Defaulting to hostname 'linux'. Jan 13 20:40:08.036025 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:40:08.041276 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:40:08.073945 kernel: SCSI subsystem initialized Jan 13 20:40:08.082939 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:40:08.093946 kernel: iscsi: registered transport (tcp) Jan 13 20:40:08.113952 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:40:08.113976 kernel: QLogic iSCSI HBA Driver Jan 13 20:40:08.155490 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:40:08.171125 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:40:08.195170 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 13 20:40:08.195212 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:40:08.196290 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:40:08.235960 kernel: raid6: avx2x4 gen() 29613 MB/s Jan 13 20:40:08.252947 kernel: raid6: avx2x2 gen() 29986 MB/s Jan 13 20:40:08.270037 kernel: raid6: avx2x1 gen() 24864 MB/s Jan 13 20:40:08.270055 kernel: raid6: using algorithm avx2x2 gen() 29986 MB/s Jan 13 20:40:08.288053 kernel: raid6: .... xor() 19107 MB/s, rmw enabled Jan 13 20:40:08.288074 kernel: raid6: using avx2x2 recovery algorithm Jan 13 20:40:08.307942 kernel: xor: automatically using best checksumming function avx Jan 13 20:40:08.459950 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:40:08.470915 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:40:08.491100 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:40:08.502391 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jan 13 20:40:08.507030 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:40:08.514061 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:40:08.525554 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Jan 13 20:40:08.554013 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:40:08.563088 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:40:08.624321 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:40:08.634368 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:40:08.647284 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:40:08.650478 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 13 20:40:08.652990 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:40:08.655914 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:40:08.665974 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 13 20:40:08.684204 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 20:40:08.684393 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:40:08.684410 kernel: GPT:9289727 != 19775487 Jan 13 20:40:08.684424 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:40:08.684445 kernel: GPT:9289727 != 19775487 Jan 13 20:40:08.684459 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:40:08.684473 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:40:08.684487 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 20:40:08.670170 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:40:08.682831 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:40:08.695357 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:40:08.703504 kernel: libata version 3.00 loaded. Jan 13 20:40:08.695425 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:40:08.698875 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:40:08.700311 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:40:08.700380 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 13 20:40:08.716169 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (474) Jan 13 20:40:08.716194 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (471) Jan 13 20:40:08.701988 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:40:08.719543 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 20:40:08.736629 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 20:40:08.736656 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 20:40:08.736858 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 20:40:08.738092 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 20:40:08.738108 kernel: AES CTR mode by8 optimization enabled Jan 13 20:40:08.738121 kernel: scsi host0: ahci Jan 13 20:40:08.738306 kernel: scsi host1: ahci Jan 13 20:40:08.738471 kernel: scsi host2: ahci Jan 13 20:40:08.738660 kernel: scsi host3: ahci Jan 13 20:40:08.739004 kernel: scsi host4: ahci Jan 13 20:40:08.739196 kernel: scsi host5: ahci Jan 13 20:40:08.739380 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 13 20:40:08.739397 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 13 20:40:08.739412 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 13 20:40:08.739426 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 13 20:40:08.739440 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 13 20:40:08.739454 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 13 20:40:08.720736 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:40:08.743370 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 13 20:40:08.750197 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 20:40:08.780912 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:40:08.795138 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:40:08.799854 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 20:40:08.800311 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 20:40:08.821078 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:40:08.824238 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:40:08.834779 disk-uuid[552]: Primary Header is updated. Jan 13 20:40:08.834779 disk-uuid[552]: Secondary Entries is updated. Jan 13 20:40:08.834779 disk-uuid[552]: Secondary Header is updated. Jan 13 20:40:08.839640 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:40:08.845461 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 20:40:09.046957 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 13 20:40:09.047053 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 20:40:09.047942 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 20:40:09.047971 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 20:40:09.048941 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 20:40:09.049949 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 20:40:09.050943 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 13 20:40:09.050963 kernel: ata3.00: applying bridge limits Jan 13 20:40:09.051965 kernel: ata3.00: configured for UDMA/100 Jan 13 20:40:09.053971 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 20:40:09.099964 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 13 20:40:09.113610 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 20:40:09.113624 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 13 20:40:09.845538 disk-uuid[558]: The operation has completed successfully. Jan 13 20:40:09.847091 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:40:09.870252 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:40:09.870370 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:40:09.905134 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:40:09.908461 sh[589]: Success Jan 13 20:40:09.919963 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 13 20:40:09.949525 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:40:09.960203 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:40:09.962986 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 13 20:40:09.973126 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 13 20:40:09.973161 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:40:09.973175 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:40:09.974138 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:40:09.975476 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:40:09.979323 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:40:09.981479 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:40:09.997054 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:40:09.999543 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:40:10.006524 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:40:10.006556 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:40:10.006582 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:40:10.009939 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:40:10.017983 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:40:10.019791 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:40:10.031148 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:40:10.042117 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 13 20:40:10.098325 ignition[688]: Ignition 2.20.0 Jan 13 20:40:10.098337 ignition[688]: Stage: fetch-offline Jan 13 20:40:10.098375 ignition[688]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:10.098384 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:40:10.098473 ignition[688]: parsed url from cmdline: "" Jan 13 20:40:10.098477 ignition[688]: no config URL provided Jan 13 20:40:10.098482 ignition[688]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:40:10.098490 ignition[688]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:40:10.098517 ignition[688]: op(1): [started] loading QEMU firmware config module Jan 13 20:40:10.098522 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 20:40:10.108484 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:40:10.108529 ignition[688]: op(1): [finished] loading QEMU firmware config module Jan 13 20:40:10.111320 ignition[688]: parsing config with SHA512: ab20b7451345b6134e6131a5e36c47587a2f3bdf70f7fe74dfcb15b43681068ee5792c3479d9b086741cadd5eca67cd667ecfd9733cea306ebdb8f0711791f6a Jan 13 20:40:10.113852 unknown[688]: fetched base config from "system" Jan 13 20:40:10.113863 unknown[688]: fetched user config from "qemu" Jan 13 20:40:10.114102 ignition[688]: fetch-offline: fetch-offline passed Jan 13 20:40:10.114165 ignition[688]: Ignition finished successfully Jan 13 20:40:10.119089 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:40:10.119799 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:40:10.146801 systemd-networkd[778]: lo: Link UP Jan 13 20:40:10.146810 systemd-networkd[778]: lo: Gained carrier Jan 13 20:40:10.149610 systemd-networkd[778]: Enumeration completed Jan 13 20:40:10.149712 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 13 20:40:10.151477 systemd[1]: Reached target network.target - Network. Jan 13 20:40:10.152199 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 20:40:10.152356 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:40:10.152361 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:40:10.153530 systemd-networkd[778]: eth0: Link UP Jan 13 20:40:10.153534 systemd-networkd[778]: eth0: Gained carrier Jan 13 20:40:10.153540 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:40:10.162058 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 20:40:10.167973 systemd-networkd[778]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:40:10.174733 ignition[781]: Ignition 2.20.0 Jan 13 20:40:10.174743 ignition[781]: Stage: kargs Jan 13 20:40:10.174885 ignition[781]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:10.174895 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:40:10.178328 ignition[781]: kargs: kargs passed Jan 13 20:40:10.178372 ignition[781]: Ignition finished successfully Jan 13 20:40:10.182365 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:40:10.193093 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 13 20:40:10.205275 ignition[790]: Ignition 2.20.0 Jan 13 20:40:10.205286 ignition[790]: Stage: disks Jan 13 20:40:10.205434 ignition[790]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:10.205445 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:40:10.206049 ignition[790]: disks: disks passed Jan 13 20:40:10.206091 ignition[790]: Ignition finished successfully Jan 13 20:40:10.212238 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:40:10.212956 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:40:10.214650 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:40:10.216814 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:40:10.219250 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:40:10.221192 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:40:10.237118 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:40:10.249366 systemd-resolved[234]: Detected conflict on linux IN A 10.0.0.101 Jan 13 20:40:10.249379 systemd-resolved[234]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. Jan 13 20:40:10.251065 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 20:40:10.260275 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:40:10.275054 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:40:10.361952 kernel: EXT4-fs (vda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 13 20:40:10.362080 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:40:10.362876 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:40:10.371999 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 13 20:40:10.373151 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:40:10.374946 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 20:40:10.380031 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (809) Jan 13 20:40:10.380052 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:40:10.374982 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:40:10.387074 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:40:10.387093 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:40:10.387104 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:40:10.375003 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:40:10.382681 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:40:10.387839 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:40:10.390353 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:40:10.424017 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:40:10.429080 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:40:10.432826 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:40:10.436485 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:40:10.514302 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:40:10.519041 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:40:10.520639 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 13 20:40:10.527943 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:40:10.544252 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:40:10.547551 ignition[924]: INFO : Ignition 2.20.0 Jan 13 20:40:10.547551 ignition[924]: INFO : Stage: mount Jan 13 20:40:10.549343 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:10.549343 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:40:10.549343 ignition[924]: INFO : mount: mount passed Jan 13 20:40:10.549343 ignition[924]: INFO : Ignition finished successfully Jan 13 20:40:10.554597 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:40:10.563052 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:40:10.972631 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:40:10.990065 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:40:10.995943 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (936) Jan 13 20:40:10.998154 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:40:10.998176 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:40:10.998187 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:40:11.000952 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:40:11.002451 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:40:11.022845 ignition[953]: INFO : Ignition 2.20.0 Jan 13 20:40:11.022845 ignition[953]: INFO : Stage: files Jan 13 20:40:11.024758 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:11.024758 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:40:11.027482 ignition[953]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:40:11.028968 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:40:11.028968 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:40:11.033650 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:40:11.035176 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:40:11.036944 unknown[953]: wrote ssh authorized keys file for user: core Jan 13 20:40:11.038205 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:40:11.040477 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:40:11.042330 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:40:11.044398 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:40:11.044398 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:40:11.044398 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:40:11.044398 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: 
op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:40:11.044398 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:40:11.044398 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 13 20:40:11.378329 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 13 20:40:11.699783 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:40:11.699783 ignition[953]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jan 13 20:40:11.703877 ignition[953]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:40:11.703877 ignition[953]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:40:11.703877 ignition[953]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jan 13 20:40:11.703877 ignition[953]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 20:40:11.725364 ignition[953]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:40:11.729675 ignition[953]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:40:11.731410 ignition[953]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 20:40:11.733035 ignition[953]: INFO : files: createResultFile: 
createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:40:11.734875 ignition[953]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:40:11.736570 ignition[953]: INFO : files: files passed Jan 13 20:40:11.737328 ignition[953]: INFO : Ignition finished successfully Jan 13 20:40:11.740522 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:40:11.751212 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:40:11.754283 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:40:11.756966 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:40:11.757969 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:40:11.763032 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 20:40:11.765322 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:40:11.765322 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:40:11.768507 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:40:11.769025 systemd-networkd[778]: eth0: Gained IPv6LL Jan 13 20:40:11.771872 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:40:11.774558 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:40:11.790131 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:40:11.813726 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:40:11.813870 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 13 20:40:11.814602 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:40:11.817457 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:40:11.819422 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:40:11.822355 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:40:11.840020 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:40:11.841727 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:40:11.853799 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:40:11.854424 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:40:11.856590 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:40:11.856887 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:40:11.857035 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:40:11.862023 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:40:11.862575 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:40:11.862893 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:40:11.866900 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:40:11.869619 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:40:11.870348 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:40:11.873263 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:40:11.875027 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:40:11.877874 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jan 13 20:40:11.878421 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:40:11.878724 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:40:11.878842 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:40:11.884017 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:40:11.884578 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:40:11.884859 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:40:11.884977 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:40:11.889425 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:40:11.889555 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:40:11.893436 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:40:11.893593 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:40:11.894275 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:40:11.896770 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:40:11.899969 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:40:11.901281 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:40:11.903159 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:40:11.905491 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:40:11.905588 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:40:11.907291 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:40:11.907375 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:40:11.909137 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:40:11.909243 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:40:11.911159 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:40:11.911262 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:40:11.925087 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:40:11.925548 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:40:11.925690 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:40:11.927260 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:40:11.929117 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:40:11.929317 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:40:11.931359 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:40:11.931498 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:40:11.939806 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:40:11.939964 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:40:11.943670 ignition[1007]: INFO : Ignition 2.20.0
Jan 13 20:40:11.943670 ignition[1007]: INFO : Stage: umount
Jan 13 20:40:11.943670 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:40:11.943670 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:40:11.948818 ignition[1007]: INFO : umount: umount passed
Jan 13 20:40:11.948818 ignition[1007]: INFO : Ignition finished successfully
Jan 13 20:40:11.946895 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:40:11.947064 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:40:11.949047 systemd[1]: Stopped target network.target - Network.
Jan 13 20:40:11.950532 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:40:11.950592 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:40:11.952447 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:40:11.952504 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:40:11.954405 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:40:11.954459 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:40:11.956374 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:40:11.956431 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:40:11.958944 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:40:11.961177 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:40:11.963952 systemd-networkd[778]: eth0: DHCPv6 lease lost
Jan 13 20:40:11.964185 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:40:11.965914 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:40:11.966075 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:40:11.967555 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:40:11.967594 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:40:11.979036 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:40:11.980069 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:40:11.980148 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:40:11.982913 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:40:11.985239 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:40:11.985390 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:40:11.989972 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:40:11.990056 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:40:11.992905 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:40:11.993025 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:40:11.993596 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:40:11.993654 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:40:12.016187 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:40:12.016381 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:40:12.017616 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:40:12.017698 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:40:12.020196 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:40:12.020246 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:40:12.022189 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:40:12.022249 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:40:12.025857 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:40:12.025914 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:40:12.028628 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:40:12.028690 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:40:12.044042 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:40:12.044478 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:40:12.044552 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:40:12.044943 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:40:12.045000 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:40:12.045696 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:40:12.045825 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:40:12.051643 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:40:12.051767 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:40:12.333981 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:40:12.334163 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:40:12.335426 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:40:12.337803 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:40:12.337887 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:40:12.350102 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:40:12.357415 systemd[1]: Switching root.
Jan 13 20:40:12.385577 systemd-journald[192]: Journal stopped
Jan 13 20:40:14.117079 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:40:14.117139 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:40:14.117158 kernel: SELinux: policy capability open_perms=1
Jan 13 20:40:14.117170 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:40:14.117184 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:40:14.117196 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:40:14.117212 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:40:14.117228 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:40:14.117240 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:40:14.117251 kernel: audit: type=1403 audit(1736800813.272:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:40:14.117264 systemd[1]: Successfully loaded SELinux policy in 57.798ms.
Jan 13 20:40:14.117281 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.120ms.
Jan 13 20:40:14.117295 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:40:14.117311 systemd[1]: Detected virtualization kvm.
Jan 13 20:40:14.117323 systemd[1]: Detected architecture x86-64.
Jan 13 20:40:14.117335 systemd[1]: Detected first boot.
Jan 13 20:40:14.117347 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:40:14.117361 zram_generator::config[1052]: No configuration found.
Jan 13 20:40:14.117383 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:40:14.117395 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:40:14.117407 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:40:14.117419 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:40:14.117431 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:40:14.117443 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:40:14.117464 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:40:14.117479 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:40:14.117498 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:40:14.117514 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:40:14.117530 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:40:14.117545 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:40:14.117561 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:40:14.117577 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:40:14.117593 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:40:14.117609 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:40:14.117627 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:40:14.117647 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:40:14.117663 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 20:40:14.117678 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:40:14.117693 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:40:14.117709 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:40:14.117725 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:40:14.117740 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:40:14.117760 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:40:14.117776 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:40:14.117792 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:40:14.117807 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:40:14.117823 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:40:14.117839 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:40:14.117856 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:40:14.117872 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:40:14.117887 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:40:14.117903 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:40:14.120951 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:40:14.120975 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:40:14.120991 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:40:14.121008 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:40:14.121023 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:40:14.121039 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:40:14.121054 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:40:14.121070 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:40:14.121092 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:40:14.121108 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:40:14.121124 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:40:14.121140 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:40:14.121157 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:40:14.121173 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:40:14.121188 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:40:14.121204 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:40:14.121219 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:40:14.121238 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:40:14.121254 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:40:14.121269 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:40:14.121284 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:40:14.121299 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:40:14.121313 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:40:14.121327 kernel: fuse: init (API version 7.39)
Jan 13 20:40:14.121345 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:40:14.121361 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:40:14.121379 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:40:14.121394 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:40:14.121433 systemd-journald[1129]: Collecting audit messages is disabled.
Jan 13 20:40:14.121477 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:40:14.121493 kernel: loop: module loaded
Jan 13 20:40:14.121508 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:40:14.121523 systemd[1]: Stopped verity-setup.service.
Jan 13 20:40:14.121541 systemd-journald[1129]: Journal started
Jan 13 20:40:14.121568 systemd-journald[1129]: Runtime Journal (/run/log/journal/b520f8c667ca413c8e3c15861f43f45b) is 6.0M, max 48.4M, 42.3M free.
Jan 13 20:40:13.876640 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:40:13.890967 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 20:40:13.891399 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:40:14.123945 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:40:14.127613 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:40:14.128506 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:40:14.129967 kernel: ACPI: bus type drm_connector registered
Jan 13 20:40:14.130405 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:40:14.131836 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:40:14.132960 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:40:14.134279 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:40:14.135524 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:40:14.136785 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:40:14.138259 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:40:14.139879 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:40:14.140080 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:40:14.141693 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:40:14.141858 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:40:14.143302 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:40:14.143483 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:40:14.144937 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:40:14.145103 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:40:14.146621 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:40:14.146785 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:40:14.148179 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:40:14.148342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:40:14.149806 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:40:14.151207 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:40:14.152864 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:40:14.165732 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:40:14.177040 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:40:14.179643 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:40:14.180911 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:40:14.180955 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:40:14.183272 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:40:14.185652 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:40:14.187834 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:40:14.189012 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:40:14.191117 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:40:14.195027 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:40:14.196518 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:40:14.200552 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:40:14.202074 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:40:14.208687 systemd-journald[1129]: Time spent on flushing to /var/log/journal/b520f8c667ca413c8e3c15861f43f45b is 21.180ms for 932 entries.
Jan 13 20:40:14.208687 systemd-journald[1129]: System Journal (/var/log/journal/b520f8c667ca413c8e3c15861f43f45b) is 8.0M, max 195.6M, 187.6M free.
Jan 13 20:40:14.238830 systemd-journald[1129]: Received client request to flush runtime journal.
Jan 13 20:40:14.207208 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:40:14.214083 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:40:14.219091 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:40:14.222088 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:40:14.223426 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:40:14.225001 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:40:14.228674 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:40:14.233913 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:40:14.240948 kernel: loop0: detected capacity change from 0 to 140992
Jan 13 20:40:14.245458 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:40:14.249967 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:40:14.251907 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:40:14.267109 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:40:14.268760 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:40:14.270871 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:40:14.271536 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:40:14.277059 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:40:14.283587 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:40:14.285768 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 20:40:14.298150 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:40:14.312953 kernel: loop1: detected capacity change from 0 to 138184
Jan 13 20:40:14.320574 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jan 13 20:40:14.320976 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jan 13 20:40:14.326751 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:40:14.343955 kernel: loop2: detected capacity change from 0 to 210664
Jan 13 20:40:14.377964 kernel: loop3: detected capacity change from 0 to 140992
Jan 13 20:40:14.388959 kernel: loop4: detected capacity change from 0 to 138184
Jan 13 20:40:14.399953 kernel: loop5: detected capacity change from 0 to 210664
Jan 13 20:40:14.404824 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 13 20:40:14.406023 (sd-merge)[1193]: Merged extensions into '/usr'.
Jan 13 20:40:14.410523 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:40:14.410538 systemd[1]: Reloading...
Jan 13 20:40:14.476533 zram_generator::config[1225]: No configuration found.
Jan 13 20:40:14.514435 ldconfig[1161]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:40:14.585601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:40:14.635497 systemd[1]: Reloading finished in 224 ms.
Jan 13 20:40:14.674594 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:40:14.676395 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:40:14.693186 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:40:14.695579 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:40:14.702354 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:40:14.702369 systemd[1]: Reloading...
Jan 13 20:40:14.727200 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:40:14.727677 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:40:14.729034 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:40:14.729534 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Jan 13 20:40:14.729707 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Jan 13 20:40:14.736962 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:40:14.737504 systemd-tmpfiles[1257]: Skipping /boot
Jan 13 20:40:14.757999 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:40:14.758150 systemd-tmpfiles[1257]: Skipping /boot
Jan 13 20:40:14.760978 zram_generator::config[1284]: No configuration found.
Jan 13 20:40:14.878302 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:40:14.927561 systemd[1]: Reloading finished in 224 ms.
Jan 13 20:40:14.946239 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:40:14.960570 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:40:14.968088 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:40:14.970550 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:40:14.973677 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:40:14.978178 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:40:14.983152 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:40:14.987107 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:40:14.996247 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:40:14.999661 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:40:14.999890 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:40:15.003625 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:40:15.009202 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:40:15.021311 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:40:15.023115 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:40:15.023263 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:40:15.024558 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:40:15.024776 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:40:15.027580 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:40:15.027766 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:40:15.030073 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:40:15.030473 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:40:15.032670 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:40:15.039896 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
Jan 13 20:40:15.042465 augenrules[1353]: No rules
Jan 13 20:40:15.043066 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:40:15.043269 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:40:15.047240 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:40:15.052179 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:40:15.055235 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:40:15.056957 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:40:15.065255 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:40:15.066564 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:40:15.068616 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:40:15.070919 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:40:15.071290 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:40:15.072879 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:40:15.075016 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:40:15.075327 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:40:15.077146 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:40:15.077451 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:40:15.079373 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:40:15.082598 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:40:15.083056 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:40:15.086004 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:40:15.101748 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:40:15.116081 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:40:15.120010 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:40:15.128154 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:40:15.129550 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:40:15.132193 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:40:15.135676 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:40:15.141988 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1383)
Jan 13 20:40:15.148145 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:40:15.157222 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:40:15.157617 systemd-resolved[1326]: Positive Trust Anchors:
Jan 13 20:40:15.157633 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:40:15.157664 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:40:15.159897 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:40:15.164332 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:40:15.170163 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:40:15.171554 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:40:15.171587 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:40:15.172455 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:40:15.174003 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:40:15.176368 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:40:15.177083 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:40:15.180550 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:40:15.180778 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:40:15.182807 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:40:15.183298 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:40:15.186771 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 20:40:15.187749 systemd-resolved[1326]: Defaulting to hostname 'linux'.
Jan 13 20:40:15.192462 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:40:15.202079 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:40:15.205145 augenrules[1397]: /sbin/augenrules: No change
Jan 13 20:40:15.221041 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 13 20:40:15.215029 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:40:15.221207 augenrules[1431]: No rules
Jan 13 20:40:15.219691 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:40:15.221716 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:40:15.221797 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:40:15.222356 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:40:15.222677 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:40:15.239983 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 13 20:40:15.240941 kernel: ACPI: button: Power Button [PWRF]
Jan 13 20:40:15.248045 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 13 20:40:15.249160 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 13 20:40:15.252048 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 13 20:40:15.307307 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 20:40:15.293627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:40:15.302308 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:40:15.317120 systemd-networkd[1411]: lo: Link UP
Jan 13 20:40:15.317412 systemd-networkd[1411]: lo: Gained carrier
Jan 13 20:40:15.319784 systemd-networkd[1411]: Enumeration completed
Jan 13 20:40:15.319941 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:40:15.321298 systemd[1]: Reached target network.target - Network.
Jan 13 20:40:15.324261 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:40:15.324266 systemd-networkd[1411]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:40:15.326247 systemd-networkd[1411]: eth0: Link UP
Jan 13 20:40:15.326312 systemd-networkd[1411]: eth0: Gained carrier
Jan 13 20:40:15.326391 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:40:15.359775 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:40:15.370026 systemd-networkd[1411]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:40:15.370822 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
Jan 13 20:40:16.342753 systemd-timesyncd[1412]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 13 20:40:16.342817 systemd-timesyncd[1412]: Initial clock synchronization to Mon 2025-01-13 20:40:16.342609 UTC.
Jan 13 20:40:16.344661 systemd-resolved[1326]: Clock change detected. Flushing caches.
Jan 13 20:40:16.345430 kernel: kvm_amd: TSC scaling supported
Jan 13 20:40:16.345499 kernel: kvm_amd: Nested Virtualization enabled
Jan 13 20:40:16.345534 kernel: kvm_amd: Nested Paging enabled
Jan 13 20:40:16.345569 kernel: kvm_amd: LBR virtualization supported
Jan 13 20:40:16.345622 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 13 20:40:16.345670 kernel: kvm_amd: Virtual GIF supported
Jan 13 20:40:16.363407 kernel: EDAC MC: Ver: 3.0.0
Jan 13 20:40:16.374399 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:40:16.376044 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:40:16.381938 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:40:16.398472 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:40:16.412532 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:40:16.421895 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:40:16.453956 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:40:16.456318 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:40:16.457521 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:40:16.458734 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:40:16.460146 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:40:16.461731 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:40:16.463135 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:40:16.464448 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:40:16.465766 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:40:16.465797 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:40:16.466843 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:40:16.468582 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:40:16.471322 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:40:16.480443 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:40:16.482747 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:40:16.484356 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:40:16.485524 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:40:16.486499 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:40:16.487466 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:40:16.487500 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:40:16.488537 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:40:16.490851 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:40:16.495408 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:40:16.495927 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:40:16.499545 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:40:16.503624 jq[1461]: false
Jan 13 20:40:16.503627 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:40:16.505496 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:40:16.516027 dbus-daemon[1460]: [system] SELinux support is enabled
Jan 13 20:40:16.518630 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:40:16.522625 extend-filesystems[1462]: Found loop3
Jan 13 20:40:16.522625 extend-filesystems[1462]: Found loop4
Jan 13 20:40:16.522625 extend-filesystems[1462]: Found loop5
Jan 13 20:40:16.522625 extend-filesystems[1462]: Found sr0
Jan 13 20:40:16.522625 extend-filesystems[1462]: Found vda
Jan 13 20:40:16.522625 extend-filesystems[1462]: Found vda1
Jan 13 20:40:16.522625 extend-filesystems[1462]: Found vda2
Jan 13 20:40:16.522625 extend-filesystems[1462]: Found vda3
Jan 13 20:40:16.522625 extend-filesystems[1462]: Found usr
Jan 13 20:40:16.522625 extend-filesystems[1462]: Found vda4
Jan 13 20:40:16.522625 extend-filesystems[1462]: Found vda6
Jan 13 20:40:16.522625 extend-filesystems[1462]: Found vda7
Jan 13 20:40:16.522625 extend-filesystems[1462]: Found vda9
Jan 13 20:40:16.522625 extend-filesystems[1462]: Checking size of /dev/vda9
Jan 13 20:40:16.522379 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:40:16.545702 extend-filesystems[1462]: Resized partition /dev/vda9
Jan 13 20:40:16.547799 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 20:40:16.527244 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:40:16.547990 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:40:16.534503 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:40:16.536681 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:40:16.539322 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:40:16.545650 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:40:16.546774 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:40:16.553748 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:40:16.557755 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:40:16.557974 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:40:16.558312 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:40:16.558524 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:40:16.560322 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:40:16.560534 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:40:16.562408 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1383)
Jan 13 20:40:16.562448 jq[1481]: true
Jan 13 20:40:16.576551 update_engine[1479]: I20250113 20:40:16.576462 1479 main.cc:92] Flatcar Update Engine starting
Jan 13 20:40:16.580507 update_engine[1479]: I20250113 20:40:16.580219 1479 update_check_scheduler.cc:74] Next update check in 5m32s
Jan 13 20:40:16.582413 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 20:40:16.591407 jq[1483]: true
Jan 13 20:40:16.594225 (ntainerd)[1493]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:40:16.602559 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:40:16.604718 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 20:40:16.608039 extend-filesystems[1475]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 20:40:16.608039 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 20:40:16.608039 extend-filesystems[1475]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 13 20:40:16.607612 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:40:16.614070 extend-filesystems[1462]: Resized filesystem in /dev/vda9
Jan 13 20:40:16.607642 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:40:16.608193 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:40:16.608210 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:40:16.610374 systemd-logind[1468]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 20:40:16.610408 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 20:40:16.610684 systemd-logind[1468]: New seat seat0.
Jan 13 20:40:16.619034 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:40:16.620859 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:40:16.622473 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:40:16.622781 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:40:16.651118 bash[1512]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:40:16.653282 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:40:16.655701 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 20:40:16.658472 locksmithd[1505]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:40:16.779654 containerd[1493]: time="2025-01-13T20:40:16.779571928Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:40:16.799658 containerd[1493]: time="2025-01-13T20:40:16.799625856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:40:16.801578 containerd[1493]: time="2025-01-13T20:40:16.801539385Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:40:16.801578 containerd[1493]: time="2025-01-13T20:40:16.801568780Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:40:16.801672 containerd[1493]: time="2025-01-13T20:40:16.801588206Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:40:16.803079 containerd[1493]: time="2025-01-13T20:40:16.801789113Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:40:16.803079 containerd[1493]: time="2025-01-13T20:40:16.801817636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:40:16.803079 containerd[1493]: time="2025-01-13T20:40:16.801910190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:40:16.803079 containerd[1493]: time="2025-01-13T20:40:16.801924337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:40:16.803079 containerd[1493]: time="2025-01-13T20:40:16.802113361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:40:16.803079 containerd[1493]: time="2025-01-13T20:40:16.802126706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:40:16.803079 containerd[1493]: time="2025-01-13T20:40:16.802140141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:40:16.803079 containerd[1493]: time="2025-01-13T20:40:16.802150120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:40:16.803079 containerd[1493]: time="2025-01-13T20:40:16.802240820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:40:16.803079 containerd[1493]: time="2025-01-13T20:40:16.802494075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:40:16.803079 containerd[1493]: time="2025-01-13T20:40:16.802611605Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:40:16.803633 containerd[1493]: time="2025-01-13T20:40:16.802623758Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:40:16.803633 containerd[1493]: time="2025-01-13T20:40:16.802720269Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:40:16.803633 containerd[1493]: time="2025-01-13T20:40:16.802773028Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:40:16.834780 containerd[1493]: time="2025-01-13T20:40:16.834695474Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:40:16.834780 containerd[1493]: time="2025-01-13T20:40:16.834751690Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:40:16.834780 containerd[1493]: time="2025-01-13T20:40:16.834770324Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:40:16.834780 containerd[1493]: time="2025-01-13T20:40:16.834788839Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:40:16.834780 containerd[1493]: time="2025-01-13T20:40:16.834804458Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:40:16.835038 containerd[1493]: time="2025-01-13T20:40:16.834968666Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:40:16.835269 containerd[1493]: time="2025-01-13T20:40:16.835230578Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:40:16.835405 containerd[1493]: time="2025-01-13T20:40:16.835370159Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:40:16.835436 containerd[1493]: time="2025-01-13T20:40:16.835413621Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:40:16.835473 containerd[1493]: time="2025-01-13T20:40:16.835434350Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:40:16.835473 containerd[1493]: time="2025-01-13T20:40:16.835451973Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:40:16.835473 containerd[1493]: time="2025-01-13T20:40:16.835467251Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:40:16.835544 containerd[1493]: time="2025-01-13T20:40:16.835482520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:40:16.835544 containerd[1493]: time="2025-01-13T20:40:16.835499442Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:40:16.835544 containerd[1493]: time="2025-01-13T20:40:16.835515893Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:40:16.835544 containerd[1493]: time="2025-01-13T20:40:16.835530901Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:40:16.835718 containerd[1493]: time="2025-01-13T20:40:16.835545618Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:40:16.835718 containerd[1493]: time="2025-01-13T20:40:16.835560787Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:40:16.835718 containerd[1493]: time="2025-01-13T20:40:16.835584612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:40:16.835718 containerd[1493]: time="2025-01-13T20:40:16.835601433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:40:16.835718 containerd[1493]: time="2025-01-13T20:40:16.835617002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:40:16.835718 containerd[1493]: time="2025-01-13T20:40:16.835632752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:40:16.835718 containerd[1493]: time="2025-01-13T20:40:16.835655735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:40:16.835718 containerd[1493]: time="2025-01-13T20:40:16.835671925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:40:16.835718 containerd[1493]: time="2025-01-13T20:40:16.835686984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:40:16.835718 containerd[1493]: time="2025-01-13T20:40:16.835702252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:40:16.835718 containerd[1493]: time="2025-01-13T20:40:16.835718673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:40:16.835995 containerd[1493]: time="2025-01-13T20:40:16.835736957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:40:16.835995 containerd[1493]: time="2025-01-13T20:40:16.835757446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:40:16.835995 containerd[1493]: time="2025-01-13T20:40:16.835773105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:40:16.835995 containerd[1493]: time="2025-01-13T20:40:16.835787672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:40:16.835995 containerd[1493]: time="2025-01-13T20:40:16.835805716Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:40:16.835995 containerd[1493]: time="2025-01-13T20:40:16.835828960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:40:16.835995 containerd[1493]: time="2025-01-13T20:40:16.835844449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:40:16.835995 containerd[1493]: time="2025-01-13T20:40:16.835857573Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:40:16.835995 containerd[1493]: time="2025-01-13T20:40:16.835926262Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:40:16.835995 containerd[1493]: time="2025-01-13T20:40:16.835946370Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:40:16.835995 containerd[1493]: time="2025-01-13T20:40:16.835959465Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:40:16.835995 containerd[1493]: time="2025-01-13T20:40:16.835974473Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:40:16.835995 containerd[1493]: time="2025-01-13T20:40:16.835986325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:40:16.836304 containerd[1493]: time="2025-01-13T20:40:16.836023344Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:40:16.836304 containerd[1493]: time="2025-01-13T20:40:16.836037260Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:40:16.836304 containerd[1493]: time="2025-01-13T20:40:16.836049884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:40:16.836467 containerd[1493]: time="2025-01-13T20:40:16.836374944Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 20:40:16.836467 containerd[1493]: time="2025-01-13T20:40:16.836455865Z" level=info msg="Connect containerd service"
Jan 13 20:40:16.836669 containerd[1493]: time="2025-01-13T20:40:16.836489118Z" level=info msg="using legacy CRI server"
Jan 13 20:40:16.836669 containerd[1493]: time="2025-01-13T20:40:16.836497964Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 20:40:16.836669 containerd[1493]: time="2025-01-13T20:40:16.836620685Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 20:40:16.837317 containerd[1493]: time="2025-01-13T20:40:16.837265474Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:40:16.837455 containerd[1493]: time="2025-01-13T20:40:16.837421827Z" level=info msg="Start subscribing containerd event"
Jan 13 20:40:16.837501 containerd[1493]: time="2025-01-13T20:40:16.837465659Z" level=info msg="Start recovering state"
Jan 13 20:40:16.837535 containerd[1493]: time="2025-01-13T20:40:16.837517927Z" level=info msg="Start event monitor"
Jan 13 20:40:16.837535 containerd[1493]: time="2025-01-13T20:40:16.837527525Z" level=info msg="Start snapshots syncer"
Jan 13 20:40:16.837535 containerd[1493]: time="2025-01-13T20:40:16.837536071Z" level=info msg="Start cni network conf syncer for default"
Jan 13 20:40:16.837606 containerd[1493]: time="2025-01-13T20:40:16.837543676Z" level=info msg="Start streaming server"
Jan 13 20:40:16.837827 containerd[1493]: time="2025-01-13T20:40:16.837666726Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 20:40:16.837827 containerd[1493]: time="2025-01-13T20:40:16.837729614Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 20:40:16.837827 containerd[1493]: time="2025-01-13T20:40:16.837789266Z" level=info msg="containerd successfully booted in 0.061004s"
Jan 13 20:40:16.840509 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 20:40:16.903984 sshd_keygen[1492]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:40:16.926914 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 20:40:16.936646 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 20:40:16.938816 systemd[1]: Started sshd@0-10.0.0.101:22-10.0.0.1:54478.service - OpenSSH per-connection server daemon (10.0.0.1:54478).
Jan 13 20:40:16.945096 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 20:40:16.945320 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 20:40:16.949562 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 20:40:16.981703 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 20:40:16.993767 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 20:40:16.996236 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 20:40:16.997713 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 20:40:17.014234 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 54478 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:40:17.016488 sshd-session[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:40:17.026744 systemd-logind[1468]: New session 1 of user core.
Jan 13 20:40:17.028249 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 20:40:17.042687 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 20:40:17.054506 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 20:40:17.066690 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 20:40:17.070796 (systemd)[1549]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 20:40:17.184277 systemd[1549]: Queued start job for default target default.target.
Jan 13 20:40:17.193625 systemd[1549]: Created slice app.slice - User Application Slice.
Jan 13 20:40:17.193650 systemd[1549]: Reached target paths.target - Paths.
Jan 13 20:40:17.193663 systemd[1549]: Reached target timers.target - Timers.
Jan 13 20:40:17.195203 systemd[1549]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 20:40:17.207026 systemd[1549]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 20:40:17.207150 systemd[1549]: Reached target sockets.target - Sockets.
Jan 13 20:40:17.207170 systemd[1549]: Reached target basic.target - Basic System.
Jan 13 20:40:17.207207 systemd[1549]: Reached target default.target - Main User Target.
Jan 13 20:40:17.207239 systemd[1549]: Startup finished in 129ms.
Jan 13 20:40:17.207871 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 20:40:17.210840 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 20:40:17.280721 systemd[1]: Started sshd@1-10.0.0.101:22-10.0.0.1:54480.service - OpenSSH per-connection server daemon (10.0.0.1:54480).
Jan 13 20:40:17.319344 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 54480 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:40:17.320755 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:40:17.324463 systemd-logind[1468]: New session 2 of user core.
Jan 13 20:40:17.338516 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 20:40:17.392586 sshd[1562]: Connection closed by 10.0.0.1 port 54480
Jan 13 20:40:17.392952 sshd-session[1560]: pam_unix(sshd:session): session closed for user core
Jan 13 20:40:17.399728 systemd[1]: sshd@1-10.0.0.101:22-10.0.0.1:54480.service: Deactivated successfully.
Jan 13 20:40:17.401154 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 20:40:17.402529 systemd-logind[1468]: Session 2 logged out. Waiting for processes to exit.
Jan 13 20:40:17.416597 systemd[1]: Started sshd@2-10.0.0.101:22-10.0.0.1:54482.service - OpenSSH per-connection server daemon (10.0.0.1:54482).
Jan 13 20:40:17.418681 systemd-logind[1468]: Removed session 2.
Jan 13 20:40:17.455302 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 54482 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:40:17.456839 sshd-session[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:40:17.460958 systemd-logind[1468]: New session 3 of user core.
Jan 13 20:40:17.481550 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 20:40:17.535401 sshd[1569]: Connection closed by 10.0.0.1 port 54482
Jan 13 20:40:17.535735 sshd-session[1567]: pam_unix(sshd:session): session closed for user core
Jan 13 20:40:17.539383 systemd[1]: sshd@2-10.0.0.101:22-10.0.0.1:54482.service: Deactivated successfully.
Jan 13 20:40:17.540936 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 20:40:17.541544 systemd-logind[1468]: Session 3 logged out. Waiting for processes to exit.
Jan 13 20:40:17.542358 systemd-logind[1468]: Removed session 3.
Jan 13 20:40:17.794682 systemd-networkd[1411]: eth0: Gained IPv6LL
Jan 13 20:40:17.798652 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 20:40:17.800554 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 20:40:17.817694 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 13 20:40:17.820765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:40:17.823092 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 20:40:17.842089 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 20:40:17.842318 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 13 20:40:17.844102 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 20:40:17.849943 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 20:40:18.437883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:40:18.439529 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 20:40:18.440765 systemd[1]: Startup finished in 677ms (kernel) + 5.581s (initrd) + 4.234s (userspace) = 10.493s.
Jan 13 20:40:18.453180 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:40:18.879197 kubelet[1595]: E0113 20:40:18.878958 1595 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:40:18.883070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:40:18.883268 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:40:27.546441 systemd[1]: Started sshd@3-10.0.0.101:22-10.0.0.1:45098.service - OpenSSH per-connection server daemon (10.0.0.1:45098).
Jan 13 20:40:27.588829 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 45098 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:40:27.590169 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:40:27.594052 systemd-logind[1468]: New session 4 of user core.
Jan 13 20:40:27.607620 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 20:40:27.661505 sshd[1611]: Connection closed by 10.0.0.1 port 45098
Jan 13 20:40:27.661829 sshd-session[1609]: pam_unix(sshd:session): session closed for user core
Jan 13 20:40:27.674747 systemd[1]: sshd@3-10.0.0.101:22-10.0.0.1:45098.service: Deactivated successfully.
Jan 13 20:40:27.676254 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 20:40:27.677483 systemd-logind[1468]: Session 4 logged out. Waiting for processes to exit.
Jan 13 20:40:27.678648 systemd[1]: Started sshd@4-10.0.0.101:22-10.0.0.1:45106.service - OpenSSH per-connection server daemon (10.0.0.1:45106).
Jan 13 20:40:27.679287 systemd-logind[1468]: Removed session 4.
Jan 13 20:40:27.720799 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 45106 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:40:27.722077 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:40:27.725938 systemd-logind[1468]: New session 5 of user core.
Jan 13 20:40:27.736504 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 20:40:27.786096 sshd[1618]: Connection closed by 10.0.0.1 port 45106
Jan 13 20:40:27.786574 sshd-session[1616]: pam_unix(sshd:session): session closed for user core
Jan 13 20:40:27.799442 systemd[1]: sshd@4-10.0.0.101:22-10.0.0.1:45106.service: Deactivated successfully.
Jan 13 20:40:27.801523 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 20:40:27.802923 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit.
Jan 13 20:40:27.812674 systemd[1]: Started sshd@5-10.0.0.101:22-10.0.0.1:45108.service - OpenSSH per-connection server daemon (10.0.0.1:45108).
Jan 13 20:40:27.813660 systemd-logind[1468]: Removed session 5.
Jan 13 20:40:27.851355 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 45108 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:40:27.852905 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:40:27.856537 systemd-logind[1468]: New session 6 of user core.
Jan 13 20:40:27.866529 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 20:40:27.919925 sshd[1625]: Connection closed by 10.0.0.1 port 45108
Jan 13 20:40:27.920497 sshd-session[1623]: pam_unix(sshd:session): session closed for user core
Jan 13 20:40:27.933290 systemd[1]: sshd@5-10.0.0.101:22-10.0.0.1:45108.service: Deactivated successfully.
Jan 13 20:40:27.934885 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 20:40:27.936536 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit.
Jan 13 20:40:27.937757 systemd[1]: Started sshd@6-10.0.0.101:22-10.0.0.1:45110.service - OpenSSH per-connection server daemon (10.0.0.1:45110).
Jan 13 20:40:27.938774 systemd-logind[1468]: Removed session 6.
Jan 13 20:40:27.982652 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 45110 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:40:27.984142 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:40:27.987930 systemd-logind[1468]: New session 7 of user core.
Jan 13 20:40:27.997555 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 20:40:28.056896 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 20:40:28.057251 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:40:28.085074 sudo[1633]: pam_unix(sudo:session): session closed for user root
Jan 13 20:40:28.087363 sshd[1632]: Connection closed by 10.0.0.1 port 45110
Jan 13 20:40:28.087839 sshd-session[1630]: pam_unix(sshd:session): session closed for user core
Jan 13 20:40:28.107201 systemd[1]: sshd@6-10.0.0.101:22-10.0.0.1:45110.service: Deactivated successfully.
Jan 13 20:40:28.109516 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 20:40:28.111734 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit.
Jan 13 20:40:28.125781 systemd[1]: Started sshd@7-10.0.0.101:22-10.0.0.1:45112.service - OpenSSH per-connection server daemon (10.0.0.1:45112).
Jan 13 20:40:28.126996 systemd-logind[1468]: Removed session 7.
Jan 13 20:40:28.168908 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 45112 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:40:28.170641 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:40:28.175420 systemd-logind[1468]: New session 8 of user core.
Jan 13 20:40:28.193617 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 20:40:28.249179 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 20:40:28.249527 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:40:28.253560 sudo[1642]: pam_unix(sudo:session): session closed for user root
Jan 13 20:40:28.259965 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 13 20:40:28.260359 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:40:28.276772 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:40:28.307965 augenrules[1664]: No rules
Jan 13 20:40:28.309699 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:40:28.309930 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:40:28.311249 sudo[1641]: pam_unix(sudo:session): session closed for user root
Jan 13 20:40:28.312831 sshd[1640]: Connection closed by 10.0.0.1 port 45112
Jan 13 20:40:28.313158 sshd-session[1638]: pam_unix(sshd:session): session closed for user core
Jan 13 20:40:28.323792 systemd[1]: sshd@7-10.0.0.101:22-10.0.0.1:45112.service: Deactivated successfully.
Jan 13 20:40:28.325205 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 20:40:28.326705 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit.
Jan 13 20:40:28.337711 systemd[1]: Started sshd@8-10.0.0.101:22-10.0.0.1:45126.service - OpenSSH per-connection server daemon (10.0.0.1:45126).
Jan 13 20:40:28.338663 systemd-logind[1468]: Removed session 8.
Jan 13 20:40:28.382244 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 45126 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:40:28.383882 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:40:28.387738 systemd-logind[1468]: New session 9 of user core.
Jan 13 20:40:28.402559 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 20:40:28.457553 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 20:40:28.457896 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:40:28.480733 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 13 20:40:28.500361 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 20:40:28.500741 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 13 20:40:28.900697 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:40:28.908670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:40:28.997492 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 20:40:28.997617 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 20:40:28.997933 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:40:29.010833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:40:29.028977 systemd[1]: Reloading requested from client PID 1728 ('systemctl') (unit session-9.scope)...
Jan 13 20:40:29.028999 systemd[1]: Reloading...
Jan 13 20:40:29.110421 zram_generator::config[1766]: No configuration found.
Jan 13 20:40:29.798218 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:40:29.873020 systemd[1]: Reloading finished in 843 ms.
Jan 13 20:40:29.920953 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 20:40:29.921050 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 20:40:29.921326 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:40:29.923019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:40:30.071945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:40:30.077371 (kubelet)[1814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:40:30.118797 kubelet[1814]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:40:30.118797 kubelet[1814]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:40:30.118797 kubelet[1814]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:40:30.119315 kubelet[1814]: I0113 20:40:30.118863 1814 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:40:30.566241 kubelet[1814]: I0113 20:40:30.566135 1814 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 13 20:40:30.566241 kubelet[1814]: I0113 20:40:30.566170 1814 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:40:30.566434 kubelet[1814]: I0113 20:40:30.566382 1814 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 13 20:40:30.579700 kubelet[1814]: I0113 20:40:30.579662 1814 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:40:30.594206 kubelet[1814]: I0113 20:40:30.594169 1814 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 20:40:30.594412 kubelet[1814]: I0113 20:40:30.594352 1814 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:40:30.594567 kubelet[1814]: I0113 20:40:30.594395 1814 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.101","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 20:40:30.595001 kubelet[1814]: I0113 20:40:30.594978 1814 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:40:30.595001 kubelet[1814]: I0113 20:40:30.594996 1814 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 20:40:30.595146 kubelet[1814]: I0113 20:40:30.595120 1814 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:40:30.595753 kubelet[1814]: I0113 20:40:30.595718 1814 kubelet.go:400] "Attempting to sync node with API server"
Jan 13 20:40:30.595753 kubelet[1814]: I0113 20:40:30.595737 1814 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:40:30.595753 kubelet[1814]: I0113 20:40:30.595757 1814 kubelet.go:312] "Adding apiserver pod source"
Jan 13 20:40:30.595853 kubelet[1814]: I0113 20:40:30.595770 1814 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:40:30.595853 kubelet[1814]: E0113 20:40:30.595795 1814 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:40:30.595904 kubelet[1814]: E0113 20:40:30.595885 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:40:30.599660 kubelet[1814]: I0113 20:40:30.599621 1814 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:40:30.600239 kubelet[1814]: W0113 20:40:30.600205 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 20:40:30.600239 kubelet[1814]: W0113 20:40:30.600226 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 20:40:30.600239 kubelet[1814]: E0113 20:40:30.600235 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 20:40:30.600328 kubelet[1814]: E0113 20:40:30.600253 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 20:40:30.600925 kubelet[1814]: I0113 20:40:30.600905 1814 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:40:30.600968 kubelet[1814]: W0113 20:40:30.600959 1814 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 20:40:30.601614 kubelet[1814]: I0113 20:40:30.601597 1814 server.go:1264] "Started kubelet"
Jan 13 20:40:30.602180 kubelet[1814]: I0113 20:40:30.601962 1814 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:40:30.602320 kubelet[1814]: I0113 20:40:30.602287 1814 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:40:30.602320 kubelet[1814]: I0113 20:40:30.601987 1814 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:40:30.603154 kubelet[1814]: I0113 20:40:30.603136 1814 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:40:30.603378 kubelet[1814]: I0113 20:40:30.603352 1814 server.go:455] "Adding debug handlers to kubelet server"
Jan 13 20:40:30.604468 kubelet[1814]: I0113 20:40:30.604253 1814 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 20:40:30.604468 kubelet[1814]: I0113 20:40:30.604343 1814 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 13 20:40:30.604468 kubelet[1814]: I0113 20:40:30.604439 1814 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 20:40:30.606335 kubelet[1814]: I0113 20:40:30.606223 1814 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:40:30.606335 kubelet[1814]: W0113 20:40:30.606331 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 13 20:40:30.606466 kubelet[1814]: I0113 20:40:30.606339 1814 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:40:30.606466 kubelet[1814]: E0113 20:40:30.606354 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 13 20:40:30.606554 kubelet[1814]: E0113 20:40:30.606518 1814 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.101\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 13 20:40:30.607009 kubelet[1814]: E0113 20:40:30.606976 1814 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:40:30.607104 kubelet[1814]: E0113 20:40:30.606825 1814 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.101.181a5b27499c2887 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.101,UID:10.0.0.101,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.101,},FirstTimestamp:2025-01-13 20:40:30.601570439 +0000 UTC m=+0.519893848,LastTimestamp:2025-01-13 20:40:30.601570439 +0000 UTC m=+0.519893848,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.101,}"
Jan 13 20:40:30.607847 kubelet[1814]: I0113 20:40:30.607823 1814 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:40:30.610236 kubelet[1814]: E0113 20:40:30.610083 1814 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.101.181a5b2749ee5a23 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.101,UID:10.0.0.101,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.101,},FirstTimestamp:2025-01-13 20:40:30.606957091 +0000 UTC m=+0.525280500,LastTimestamp:2025-01-13 20:40:30.606957091 +0000 UTC m=+0.525280500,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.101,}"
Jan 13 20:40:30.621796 kubelet[1814]: I0113 20:40:30.621762 1814 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:40:30.622082 kubelet[1814]: I0113 20:40:30.621916 1814 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:40:30.622082 kubelet[1814]: I0113 20:40:30.621942 1814 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:40:30.623333 kubelet[1814]: E0113 20:40:30.623176 1814 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.101.181a5b274aa186da default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.101,UID:10.0.0.101,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.101 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.101,},FirstTimestamp:2025-01-13 20:40:30.618699482 +0000 UTC m=+0.537022891,LastTimestamp:2025-01-13 20:40:30.618699482 +0000 UTC m=+0.537022891,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.101,}"
Jan 13 20:40:30.626717 kubelet[1814]: E0113 20:40:30.626649 1814 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.101.181a5b274aa1a3b2 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.101,UID:10.0.0.101,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.101 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.101,},FirstTimestamp:2025-01-13 20:40:30.618706866 +0000 UTC m=+0.537030275,LastTimestamp:2025-01-13 20:40:30.618706866 +0000 UTC m=+0.537030275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.101,}"
Jan 13 20:40:30.630830 kubelet[1814]: E0113 20:40:30.630729 1814 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.101.181a5b274aa1b45e default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.101,UID:10.0.0.101,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.101 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.101,},FirstTimestamp:2025-01-13 20:40:30.618711134 +0000 UTC m=+0.537034543,LastTimestamp:2025-01-13 20:40:30.618711134 +0000 UTC m=+0.537034543,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.101,}"
Jan 13 20:40:30.705982 kubelet[1814]: I0113 20:40:30.705941 1814 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.101"
Jan 13 20:40:30.709825 kubelet[1814]: E0113 20:40:30.709803 1814 kubelet_node_status.go:96] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.101"
Jan 13 20:40:30.709883 kubelet[1814]: E0113 20:40:30.709788 1814 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.101.181a5b274aa186da\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.101.181a5b274aa186da default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.101,UID:10.0.0.101,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.101 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.101,},FirstTimestamp:2025-01-13 20:40:30.618699482 +0000 UTC m=+0.537022891,LastTimestamp:2025-01-13 20:40:30.705897579 +0000 UTC m=+0.624220988,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.101,}"
Jan 13 20:40:30.713053 kubelet[1814]: E0113 20:40:30.712987 1814 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.101.181a5b274aa1a3b2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.101.181a5b274aa1a3b2 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.101,UID:10.0.0.101,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.101 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.101,},FirstTimestamp:2025-01-13 20:40:30.618706866 +0000 UTC m=+0.537030275,LastTimestamp:2025-01-13 20:40:30.705908109 +0000 UTC m=+0.624231518,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.101,}"
Jan 13 20:40:30.716217 kubelet[1814]: E0113 20:40:30.716129 1814 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.101.181a5b274aa1b45e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.101.181a5b274aa1b45e default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.101,UID:10.0.0.101,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.101 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.101,},FirstTimestamp:2025-01-13 20:40:30.618711134 +0000 UTC m=+0.537034543,LastTimestamp:2025-01-13 20:40:30.705911345 +0000 UTC m=+0.624234754,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.101,}"
Jan 13 20:40:30.811338 kubelet[1814]: E0113 20:40:30.811295 1814 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.101\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms"
Jan 13 20:40:30.910875 kubelet[1814]: I0113 20:40:30.910764 1814 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.101"
Jan 13 20:40:31.472782 kubelet[1814]: I0113 20:40:31.472720 1814 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.101"
Jan 13 20:40:31.473858 kubelet[1814]: I0113 20:40:31.473789 1814 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 13 20:40:31.474304 containerd[1493]: time="2025-01-13T20:40:31.474264662Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 20:40:31.474645 kubelet[1814]: I0113 20:40:31.474563 1814 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 13 20:40:31.494591 kubelet[1814]: I0113 20:40:31.494574 1814 policy_none.go:49] "None policy: Start" Jan 13 20:40:31.495103 kubelet[1814]: I0113 20:40:31.495090 1814 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:40:31.495154 kubelet[1814]: I0113 20:40:31.495109 1814 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:40:31.495961 kubelet[1814]: I0113 20:40:31.495929 1814 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:40:31.497378 kubelet[1814]: I0113 20:40:31.497331 1814 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:40:31.497378 kubelet[1814]: I0113 20:40:31.497373 1814 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:40:31.497457 kubelet[1814]: I0113 20:40:31.497403 1814 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:40:31.497479 kubelet[1814]: E0113 20:40:31.497451 1814 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:40:31.567763 kubelet[1814]: I0113 20:40:31.567688 1814 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 13 20:40:31.568021 kubelet[1814]: W0113 20:40:31.567984 1814 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 20:40:31.568021 kubelet[1814]: E0113 20:40:31.567938 1814 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.97:6443/api/v1/namespaces/default/events/10.0.0.101.181a5b274aa1a3b2\": read tcp 
10.0.0.101:42568->10.0.0.97:6443: use of closed network connection" event="&Event{ObjectMeta:{10.0.0.101.181a5b274aa1a3b2 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.101,UID:10.0.0.101,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.101 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.101,},FirstTimestamp:2025-01-13 20:40:30.618706866 +0000 UTC m=+0.537030275,LastTimestamp:2025-01-13 20:40:30.910732308 +0000 UTC m=+0.829055717,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.101,}" Jan 13 20:40:31.568170 kubelet[1814]: E0113 20:40:31.568010 1814 kubelet_node_status.go:544] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-01-13T20:40:31Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-13T20:40:31Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-13T20:40:31Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-13T20:40:31Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"10.0.0.101\": Patch \"https://10.0.0.97:6443/api/v1/nodes/10.0.0.101/status?timeout=10s\": read tcp 10.0.0.101:42568->10.0.0.97:6443: use of closed network connection" Jan 13 20:40:31.586723 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 13 20:40:31.596426 kubelet[1814]: E0113 20:40:31.596380 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:31.597196 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:40:31.597822 kubelet[1814]: E0113 20:40:31.597585 1814 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:40:31.613336 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:40:31.614471 kubelet[1814]: I0113 20:40:31.614433 1814 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:40:31.614855 kubelet[1814]: I0113 20:40:31.614659 1814 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:40:31.614855 kubelet[1814]: I0113 20:40:31.614780 1814 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:40:31.615900 kubelet[1814]: E0113 20:40:31.615878 1814 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.101\" not found" Jan 13 20:40:31.653191 sudo[1675]: pam_unix(sudo:session): session closed for user root Jan 13 20:40:31.654534 sshd[1674]: Connection closed by 10.0.0.1 port 45126 Jan 13 20:40:31.654930 sshd-session[1672]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:31.659186 systemd[1]: sshd@8-10.0.0.101:22-10.0.0.1:45126.service: Deactivated successfully. Jan 13 20:40:31.661183 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:40:31.661901 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:40:31.662846 systemd-logind[1468]: Removed session 9. 
Jan 13 20:40:32.597015 kubelet[1814]: I0113 20:40:32.596966 1814 apiserver.go:52] "Watching apiserver" Jan 13 20:40:32.597015 kubelet[1814]: E0113 20:40:32.597031 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:32.600187 kubelet[1814]: I0113 20:40:32.600153 1814 topology_manager.go:215] "Topology Admit Handler" podUID="d03a867d-df67-4a2e-8a84-1d056bb7a0ed" podNamespace="kube-system" podName="kube-proxy-qc98d" Jan 13 20:40:32.600280 kubelet[1814]: I0113 20:40:32.600239 1814 topology_manager.go:215] "Topology Admit Handler" podUID="336cf812-a290-4ec8-8004-9be7ce272af7" podNamespace="kube-system" podName="cilium-z9g8g" Jan 13 20:40:32.605484 kubelet[1814]: I0113 20:40:32.605438 1814 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:40:32.606925 systemd[1]: Created slice kubepods-besteffort-podd03a867d_df67_4a2e_8a84_1d056bb7a0ed.slice - libcontainer container kubepods-besteffort-podd03a867d_df67_4a2e_8a84_1d056bb7a0ed.slice. 
Jan 13 20:40:32.615640 kubelet[1814]: I0113 20:40:32.615567 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-cilium-run\") pod \"cilium-z9g8g\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") " pod="kube-system/cilium-z9g8g" Jan 13 20:40:32.615640 kubelet[1814]: I0113 20:40:32.615622 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-hostproc\") pod \"cilium-z9g8g\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") " pod="kube-system/cilium-z9g8g" Jan 13 20:40:32.615640 kubelet[1814]: I0113 20:40:32.615644 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/336cf812-a290-4ec8-8004-9be7ce272af7-clustermesh-secrets\") pod \"cilium-z9g8g\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") " pod="kube-system/cilium-z9g8g" Jan 13 20:40:32.615822 kubelet[1814]: I0113 20:40:32.615663 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/336cf812-a290-4ec8-8004-9be7ce272af7-cilium-config-path\") pod \"cilium-z9g8g\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") " pod="kube-system/cilium-z9g8g" Jan 13 20:40:32.615822 kubelet[1814]: I0113 20:40:32.615681 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d03a867d-df67-4a2e-8a84-1d056bb7a0ed-kube-proxy\") pod \"kube-proxy-qc98d\" (UID: \"d03a867d-df67-4a2e-8a84-1d056bb7a0ed\") " pod="kube-system/kube-proxy-qc98d" Jan 13 20:40:32.615822 kubelet[1814]: I0113 20:40:32.615697 1814 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkwf6\" (UniqueName: \"kubernetes.io/projected/d03a867d-df67-4a2e-8a84-1d056bb7a0ed-kube-api-access-mkwf6\") pod \"kube-proxy-qc98d\" (UID: \"d03a867d-df67-4a2e-8a84-1d056bb7a0ed\") " pod="kube-system/kube-proxy-qc98d" Jan 13 20:40:32.615822 kubelet[1814]: I0113 20:40:32.615713 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-bpf-maps\") pod \"cilium-z9g8g\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") " pod="kube-system/cilium-z9g8g" Jan 13 20:40:32.615822 kubelet[1814]: I0113 20:40:32.615759 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-cilium-cgroup\") pod \"cilium-z9g8g\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") " pod="kube-system/cilium-z9g8g" Jan 13 20:40:32.615961 kubelet[1814]: I0113 20:40:32.615790 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d03a867d-df67-4a2e-8a84-1d056bb7a0ed-xtables-lock\") pod \"kube-proxy-qc98d\" (UID: \"d03a867d-df67-4a2e-8a84-1d056bb7a0ed\") " pod="kube-system/kube-proxy-qc98d" Jan 13 20:40:32.615961 kubelet[1814]: I0113 20:40:32.615805 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d03a867d-df67-4a2e-8a84-1d056bb7a0ed-lib-modules\") pod \"kube-proxy-qc98d\" (UID: \"d03a867d-df67-4a2e-8a84-1d056bb7a0ed\") " pod="kube-system/kube-proxy-qc98d" Jan 13 20:40:32.615961 kubelet[1814]: I0113 20:40:32.615821 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-host-proc-sys-kernel\") pod \"cilium-z9g8g\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") " pod="kube-system/cilium-z9g8g" Jan 13 20:40:32.615961 kubelet[1814]: I0113 20:40:32.615839 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/336cf812-a290-4ec8-8004-9be7ce272af7-hubble-tls\") pod \"cilium-z9g8g\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") " pod="kube-system/cilium-z9g8g" Jan 13 20:40:32.615961 kubelet[1814]: I0113 20:40:32.615855 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-etc-cni-netd\") pod \"cilium-z9g8g\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") " pod="kube-system/cilium-z9g8g" Jan 13 20:40:32.615961 kubelet[1814]: I0113 20:40:32.615870 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-host-proc-sys-net\") pod \"cilium-z9g8g\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") " pod="kube-system/cilium-z9g8g" Jan 13 20:40:32.616121 kubelet[1814]: I0113 20:40:32.615883 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-xtables-lock\") pod \"cilium-z9g8g\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") " pod="kube-system/cilium-z9g8g" Jan 13 20:40:32.616121 kubelet[1814]: I0113 20:40:32.615896 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6pc6\" (UniqueName: \"kubernetes.io/projected/336cf812-a290-4ec8-8004-9be7ce272af7-kube-api-access-n6pc6\") pod \"cilium-z9g8g\" (UID: 
\"336cf812-a290-4ec8-8004-9be7ce272af7\") " pod="kube-system/cilium-z9g8g" Jan 13 20:40:32.616121 kubelet[1814]: I0113 20:40:32.615913 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-cni-path\") pod \"cilium-z9g8g\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") " pod="kube-system/cilium-z9g8g" Jan 13 20:40:32.616121 kubelet[1814]: I0113 20:40:32.615961 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-lib-modules\") pod \"cilium-z9g8g\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") " pod="kube-system/cilium-z9g8g" Jan 13 20:40:32.620508 systemd[1]: Created slice kubepods-burstable-pod336cf812_a290_4ec8_8004_9be7ce272af7.slice - libcontainer container kubepods-burstable-pod336cf812_a290_4ec8_8004_9be7ce272af7.slice. 
Jan 13 20:40:32.919002 kubelet[1814]: E0113 20:40:32.918864 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:32.919591 containerd[1493]: time="2025-01-13T20:40:32.919539308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qc98d,Uid:d03a867d-df67-4a2e-8a84-1d056bb7a0ed,Namespace:kube-system,Attempt:0,}" Jan 13 20:40:32.933879 kubelet[1814]: E0113 20:40:32.933837 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:32.934255 containerd[1493]: time="2025-01-13T20:40:32.934218507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z9g8g,Uid:336cf812-a290-4ec8-8004-9be7ce272af7,Namespace:kube-system,Attempt:0,}" Jan 13 20:40:33.597324 kubelet[1814]: E0113 20:40:33.597241 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:34.137837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount234961893.mount: Deactivated successfully. 
Jan 13 20:40:34.150379 containerd[1493]: time="2025-01-13T20:40:34.150317256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:40:34.152039 containerd[1493]: time="2025-01-13T20:40:34.151957181Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 20:40:34.153075 containerd[1493]: time="2025-01-13T20:40:34.153045352Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:40:34.154709 containerd[1493]: time="2025-01-13T20:40:34.154658688Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:40:34.155634 containerd[1493]: time="2025-01-13T20:40:34.155595374Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:40:34.157451 containerd[1493]: time="2025-01-13T20:40:34.157401882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:40:34.158093 containerd[1493]: time="2025-01-13T20:40:34.158067390Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.238429016s" Jan 13 20:40:34.160341 containerd[1493]: 
time="2025-01-13T20:40:34.160311328Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.226026707s" Jan 13 20:40:34.343403 containerd[1493]: time="2025-01-13T20:40:34.341611471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:40:34.343403 containerd[1493]: time="2025-01-13T20:40:34.343338530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:40:34.343403 containerd[1493]: time="2025-01-13T20:40:34.343350732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:34.343650 containerd[1493]: time="2025-01-13T20:40:34.343461931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:34.345809 containerd[1493]: time="2025-01-13T20:40:34.345073183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:40:34.345809 containerd[1493]: time="2025-01-13T20:40:34.345177578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:40:34.345809 containerd[1493]: time="2025-01-13T20:40:34.345193418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:34.345809 containerd[1493]: time="2025-01-13T20:40:34.345266315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:34.482543 systemd[1]: Started cri-containerd-fa26f864499e10bb7e7ca341823bc58d7d3bdd47ae882ce42034eda27530739e.scope - libcontainer container fa26f864499e10bb7e7ca341823bc58d7d3bdd47ae882ce42034eda27530739e. Jan 13 20:40:34.486298 systemd[1]: Started cri-containerd-276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7.scope - libcontainer container 276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7. Jan 13 20:40:34.507437 containerd[1493]: time="2025-01-13T20:40:34.507209005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qc98d,Uid:d03a867d-df67-4a2e-8a84-1d056bb7a0ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa26f864499e10bb7e7ca341823bc58d7d3bdd47ae882ce42034eda27530739e\"" Jan 13 20:40:34.509558 kubelet[1814]: E0113 20:40:34.509538 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:34.511295 containerd[1493]: time="2025-01-13T20:40:34.510677310Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 20:40:34.516462 containerd[1493]: time="2025-01-13T20:40:34.516417084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z9g8g,Uid:336cf812-a290-4ec8-8004-9be7ce272af7,Namespace:kube-system,Attempt:0,} returns sandbox id \"276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7\"" Jan 13 20:40:34.517013 kubelet[1814]: E0113 20:40:34.516990 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:34.597927 kubelet[1814]: E0113 20:40:34.597853 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:35.594712 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2987654144.mount: Deactivated successfully. Jan 13 20:40:35.598465 kubelet[1814]: E0113 20:40:35.598442 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:36.598825 kubelet[1814]: E0113 20:40:36.598769 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:36.812049 containerd[1493]: time="2025-01-13T20:40:36.811994741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:40:36.874445 containerd[1493]: time="2025-01-13T20:40:36.874277093Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Jan 13 20:40:36.898067 containerd[1493]: time="2025-01-13T20:40:36.897988931Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:40:36.963322 containerd[1493]: time="2025-01-13T20:40:36.963258555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:40:36.964357 containerd[1493]: time="2025-01-13T20:40:36.964291923Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.453572815s" Jan 13 20:40:36.964357 containerd[1493]: time="2025-01-13T20:40:36.964346976Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image 
reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 13 20:40:36.965413 containerd[1493]: time="2025-01-13T20:40:36.965362721Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:40:36.969052 containerd[1493]: time="2025-01-13T20:40:36.969008037Z" level=info msg="CreateContainer within sandbox \"fa26f864499e10bb7e7ca341823bc58d7d3bdd47ae882ce42034eda27530739e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:40:37.130736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2569857157.mount: Deactivated successfully. Jan 13 20:40:37.245949 containerd[1493]: time="2025-01-13T20:40:37.245898946Z" level=info msg="CreateContainer within sandbox \"fa26f864499e10bb7e7ca341823bc58d7d3bdd47ae882ce42034eda27530739e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7f644674ebbc10158546e90d3ae141a84b7409c2210cac240b12103a2316d918\"" Jan 13 20:40:37.246408 containerd[1493]: time="2025-01-13T20:40:37.246374799Z" level=info msg="StartContainer for \"7f644674ebbc10158546e90d3ae141a84b7409c2210cac240b12103a2316d918\"" Jan 13 20:40:37.290512 systemd[1]: Started cri-containerd-7f644674ebbc10158546e90d3ae141a84b7409c2210cac240b12103a2316d918.scope - libcontainer container 7f644674ebbc10158546e90d3ae141a84b7409c2210cac240b12103a2316d918. 
Jan 13 20:40:37.579588 containerd[1493]: time="2025-01-13T20:40:37.578890464Z" level=info msg="StartContainer for \"7f644674ebbc10158546e90d3ae141a84b7409c2210cac240b12103a2316d918\" returns successfully" Jan 13 20:40:37.599572 kubelet[1814]: E0113 20:40:37.599478 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:38.588002 kubelet[1814]: E0113 20:40:38.587969 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:38.599736 kubelet[1814]: E0113 20:40:38.599647 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:38.660481 kubelet[1814]: I0113 20:40:38.660379 1814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qc98d" podStartSLOduration=6.205437701 podStartE2EDuration="8.660361078s" podCreationTimestamp="2025-01-13 20:40:30 +0000 UTC" firstStartedPulling="2025-01-13 20:40:34.510262522 +0000 UTC m=+4.428585931" lastFinishedPulling="2025-01-13 20:40:36.965185899 +0000 UTC m=+6.883509308" observedRunningTime="2025-01-13 20:40:38.660203222 +0000 UTC m=+8.578526631" watchObservedRunningTime="2025-01-13 20:40:38.660361078 +0000 UTC m=+8.578684487" Jan 13 20:40:39.590745 kubelet[1814]: E0113 20:40:39.590439 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:39.599780 kubelet[1814]: E0113 20:40:39.599746 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:40.600693 kubelet[1814]: E0113 20:40:40.600634 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 13 20:40:41.601732 kubelet[1814]: E0113 20:40:41.601692 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:42.355488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2370689977.mount: Deactivated successfully. Jan 13 20:40:42.602778 kubelet[1814]: E0113 20:40:42.602728 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:43.603706 kubelet[1814]: E0113 20:40:43.603671 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:44.603982 kubelet[1814]: E0113 20:40:44.603898 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:45.548528 containerd[1493]: time="2025-01-13T20:40:45.548442159Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:40:45.549209 containerd[1493]: time="2025-01-13T20:40:45.549148303Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735375" Jan 13 20:40:45.550434 containerd[1493]: time="2025-01-13T20:40:45.550377177Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:40:45.552036 containerd[1493]: time="2025-01-13T20:40:45.551995192Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.586600431s" Jan 13 20:40:45.552036 containerd[1493]: time="2025-01-13T20:40:45.552033914Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 20:40:45.554650 containerd[1493]: time="2025-01-13T20:40:45.554595919Z" level=info msg="CreateContainer within sandbox \"276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:40:45.569860 containerd[1493]: time="2025-01-13T20:40:45.569807466Z" level=info msg="CreateContainer within sandbox \"276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c\"" Jan 13 20:40:45.570496 containerd[1493]: time="2025-01-13T20:40:45.570453197Z" level=info msg="StartContainer for \"40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c\"" Jan 13 20:40:45.603560 systemd[1]: Started cri-containerd-40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c.scope - libcontainer container 40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c. Jan 13 20:40:45.604092 kubelet[1814]: E0113 20:40:45.604070 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:45.633994 containerd[1493]: time="2025-01-13T20:40:45.633938825Z" level=info msg="StartContainer for \"40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c\" returns successfully" Jan 13 20:40:45.645338 systemd[1]: cri-containerd-40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c.scope: Deactivated successfully. 
Jan 13 20:40:46.316881 containerd[1493]: time="2025-01-13T20:40:46.316802709Z" level=info msg="shim disconnected" id=40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c namespace=k8s.io
Jan 13 20:40:46.316881 containerd[1493]: time="2025-01-13T20:40:46.316865236Z" level=warning msg="cleaning up after shim disconnected" id=40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c namespace=k8s.io
Jan 13 20:40:46.316881 containerd[1493]: time="2025-01-13T20:40:46.316881507Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:40:46.563806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c-rootfs.mount: Deactivated successfully.
Jan 13 20:40:46.602723 kubelet[1814]: E0113 20:40:46.602666 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:46.604378 kubelet[1814]: E0113 20:40:46.604320 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:40:46.604940 containerd[1493]: time="2025-01-13T20:40:46.604849780Z" level=info msg="CreateContainer within sandbox \"276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:40:46.625407 containerd[1493]: time="2025-01-13T20:40:46.625340367Z" level=info msg="CreateContainer within sandbox \"276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f\""
Jan 13 20:40:46.626020 containerd[1493]: time="2025-01-13T20:40:46.625955230Z" level=info msg="StartContainer for \"b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f\""
Jan 13 20:40:46.941546 systemd[1]: Started cri-containerd-b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f.scope - libcontainer container b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f.
Jan 13 20:40:46.982179 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:40:46.982515 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:40:46.982592 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:40:46.990898 containerd[1493]: time="2025-01-13T20:40:46.990721557Z" level=info msg="StartContainer for \"b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f\" returns successfully"
Jan 13 20:40:46.993011 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:40:46.993282 systemd[1]: cri-containerd-b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f.scope: Deactivated successfully.
Jan 13 20:40:47.009686 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:40:47.163821 containerd[1493]: time="2025-01-13T20:40:47.163563518Z" level=info msg="shim disconnected" id=b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f namespace=k8s.io
Jan 13 20:40:47.163821 containerd[1493]: time="2025-01-13T20:40:47.163624151Z" level=warning msg="cleaning up after shim disconnected" id=b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f namespace=k8s.io
Jan 13 20:40:47.163821 containerd[1493]: time="2025-01-13T20:40:47.163632367Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:40:47.563672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f-rootfs.mount: Deactivated successfully.
Jan 13 20:40:47.604493 kubelet[1814]: E0113 20:40:47.604442 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:40:47.605643 kubelet[1814]: E0113 20:40:47.605622 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:47.607251 containerd[1493]: time="2025-01-13T20:40:47.607208390Z" level=info msg="CreateContainer within sandbox \"276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:40:47.633023 containerd[1493]: time="2025-01-13T20:40:47.632969852Z" level=info msg="CreateContainer within sandbox \"276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1\""
Jan 13 20:40:47.633525 containerd[1493]: time="2025-01-13T20:40:47.633500427Z" level=info msg="StartContainer for \"a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1\""
Jan 13 20:40:47.668528 systemd[1]: Started cri-containerd-a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1.scope - libcontainer container a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1.
Jan 13 20:40:47.783318 systemd[1]: cri-containerd-a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1.scope: Deactivated successfully.
Jan 13 20:40:47.783854 containerd[1493]: time="2025-01-13T20:40:47.783578567Z" level=info msg="StartContainer for \"a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1\" returns successfully"
Jan 13 20:40:47.825271 containerd[1493]: time="2025-01-13T20:40:47.825148606Z" level=info msg="shim disconnected" id=a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1 namespace=k8s.io
Jan 13 20:40:47.825271 containerd[1493]: time="2025-01-13T20:40:47.825201245Z" level=warning msg="cleaning up after shim disconnected" id=a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1 namespace=k8s.io
Jan 13 20:40:47.825271 containerd[1493]: time="2025-01-13T20:40:47.825209159Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:40:48.563663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1-rootfs.mount: Deactivated successfully.
Jan 13 20:40:48.604983 kubelet[1814]: E0113 20:40:48.604933 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:40:48.609149 kubelet[1814]: E0113 20:40:48.609107 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:48.610874 containerd[1493]: time="2025-01-13T20:40:48.610838715Z" level=info msg="CreateContainer within sandbox \"276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:40:48.628084 containerd[1493]: time="2025-01-13T20:40:48.628027991Z" level=info msg="CreateContainer within sandbox \"276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf\""
Jan 13 20:40:48.628580 containerd[1493]: time="2025-01-13T20:40:48.628558145Z" level=info msg="StartContainer for \"f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf\""
Jan 13 20:40:48.659515 systemd[1]: Started cri-containerd-f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf.scope - libcontainer container f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf.
Jan 13 20:40:48.686475 systemd[1]: cri-containerd-f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf.scope: Deactivated successfully.
Jan 13 20:40:48.689759 containerd[1493]: time="2025-01-13T20:40:48.689727350Z" level=info msg="StartContainer for \"f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf\" returns successfully"
Jan 13 20:40:48.715617 containerd[1493]: time="2025-01-13T20:40:48.715552040Z" level=info msg="shim disconnected" id=f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf namespace=k8s.io
Jan 13 20:40:48.715617 containerd[1493]: time="2025-01-13T20:40:48.715602044Z" level=warning msg="cleaning up after shim disconnected" id=f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf namespace=k8s.io
Jan 13 20:40:48.715617 containerd[1493]: time="2025-01-13T20:40:48.715615579Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:40:49.564867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf-rootfs.mount: Deactivated successfully.
Jan 13 20:40:49.605235 kubelet[1814]: E0113 20:40:49.605181 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:40:49.612352 kubelet[1814]: E0113 20:40:49.612320 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:49.614021 containerd[1493]: time="2025-01-13T20:40:49.613973466Z" level=info msg="CreateContainer within sandbox \"276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:40:49.632239 containerd[1493]: time="2025-01-13T20:40:49.632196587Z" level=info msg="CreateContainer within sandbox \"276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc\""
Jan 13 20:40:49.632705 containerd[1493]: time="2025-01-13T20:40:49.632655700Z" level=info msg="StartContainer for \"29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc\""
Jan 13 20:40:49.666519 systemd[1]: Started cri-containerd-29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc.scope - libcontainer container 29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc.
Jan 13 20:40:49.696518 containerd[1493]: time="2025-01-13T20:40:49.696455301Z" level=info msg="StartContainer for \"29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc\" returns successfully"
Jan 13 20:40:49.902777 kubelet[1814]: I0113 20:40:49.902736 1814 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 20:40:50.217419 kernel: Initializing XFRM netlink socket
Jan 13 20:40:50.596458 kubelet[1814]: E0113 20:40:50.596367 1814 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:40:50.605884 kubelet[1814]: E0113 20:40:50.605827 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:40:50.616998 kubelet[1814]: E0113 20:40:50.616966 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:50.630592 kubelet[1814]: I0113 20:40:50.630523 1814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z9g8g" podStartSLOduration=9.595044148 podStartE2EDuration="20.630506088s" podCreationTimestamp="2025-01-13 20:40:30 +0000 UTC" firstStartedPulling="2025-01-13 20:40:34.517468706 +0000 UTC m=+4.435792115" lastFinishedPulling="2025-01-13 20:40:45.552930646 +0000 UTC m=+15.471254055" observedRunningTime="2025-01-13 20:40:50.629841479 +0000 UTC m=+20.548164888" watchObservedRunningTime="2025-01-13 20:40:50.630506088 +0000 UTC m=+20.548829497"
Jan 13 20:40:51.606788 kubelet[1814]: E0113 20:40:51.606738 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:40:51.618548 kubelet[1814]: E0113 20:40:51.618512 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:51.910892 systemd-networkd[1411]: cilium_host: Link UP
Jan 13 20:40:51.911117 systemd-networkd[1411]: cilium_net: Link UP
Jan 13 20:40:51.911347 systemd-networkd[1411]: cilium_net: Gained carrier
Jan 13 20:40:51.911591 systemd-networkd[1411]: cilium_host: Gained carrier
Jan 13 20:40:52.012996 systemd-networkd[1411]: cilium_vxlan: Link UP
Jan 13 20:40:52.013003 systemd-networkd[1411]: cilium_vxlan: Gained carrier
Jan 13 20:40:52.227430 kernel: NET: Registered PF_ALG protocol family
Jan 13 20:40:52.607793 kubelet[1814]: E0113 20:40:52.607724 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:40:52.619505 kubelet[1814]: E0113 20:40:52.619403 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:52.739501 systemd-networkd[1411]: cilium_net: Gained IPv6LL
Jan 13 20:40:52.854446 systemd-networkd[1411]: lxc_health: Link UP
Jan 13 20:40:52.867512 systemd-networkd[1411]: cilium_host: Gained IPv6LL
Jan 13 20:40:52.868405 systemd-networkd[1411]: lxc_health: Gained carrier
Jan 13 20:40:53.093066 kubelet[1814]: I0113 20:40:53.093005 1814 topology_manager.go:215] "Topology Admit Handler" podUID="ed0e6474-c69d-40c4-a4de-b63bc69f1dbd" podNamespace="default" podName="nginx-deployment-85f456d6dd-vsl6x"
Jan 13 20:40:53.098647 systemd[1]: Created slice kubepods-besteffort-poded0e6474_c69d_40c4_a4de_b63bc69f1dbd.slice - libcontainer container kubepods-besteffort-poded0e6474_c69d_40c4_a4de_b63bc69f1dbd.slice.
Jan 13 20:40:53.159178 kubelet[1814]: I0113 20:40:53.159050 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6xcv\" (UniqueName: \"kubernetes.io/projected/ed0e6474-c69d-40c4-a4de-b63bc69f1dbd-kube-api-access-c6xcv\") pod \"nginx-deployment-85f456d6dd-vsl6x\" (UID: \"ed0e6474-c69d-40c4-a4de-b63bc69f1dbd\") " pod="default/nginx-deployment-85f456d6dd-vsl6x"
Jan 13 20:40:53.402570 containerd[1493]: time="2025-01-13T20:40:53.402519398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vsl6x,Uid:ed0e6474-c69d-40c4-a4de-b63bc69f1dbd,Namespace:default,Attempt:0,}"
Jan 13 20:40:53.570611 systemd-networkd[1411]: cilium_vxlan: Gained IPv6LL
Jan 13 20:40:53.608702 kubelet[1814]: E0113 20:40:53.608619 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:40:53.622246 kubelet[1814]: E0113 20:40:53.622178 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:53.940474 systemd-networkd[1411]: lxcc3e4a7dd15a9: Link UP
Jan 13 20:40:53.953217 kernel: eth0: renamed from tmpdfe9b
Jan 13 20:40:53.958323 systemd-networkd[1411]: lxcc3e4a7dd15a9: Gained carrier
Jan 13 20:40:54.468321 systemd-networkd[1411]: lxc_health: Gained IPv6LL
Jan 13 20:40:54.609554 kubelet[1814]: E0113 20:40:54.609466 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:40:54.624204 kubelet[1814]: E0113 20:40:54.624059 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:55.609787 kubelet[1814]: E0113 20:40:55.609671 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:40:55.630305 kubelet[1814]: E0113 20:40:55.630239 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:55.746691 systemd-networkd[1411]: lxcc3e4a7dd15a9: Gained IPv6LL
Jan 13 20:40:56.609948 kubelet[1814]: E0113 20:40:56.609878 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:40:57.610611 kubelet[1814]: E0113 20:40:57.610511 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:40:58.610961 kubelet[1814]: E0113 20:40:58.610893 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:40:59.379932 containerd[1493]: time="2025-01-13T20:40:59.379726486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:40:59.379932 containerd[1493]: time="2025-01-13T20:40:59.379848488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:40:59.379932 containerd[1493]: time="2025-01-13T20:40:59.379870129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:40:59.383923 containerd[1493]: time="2025-01-13T20:40:59.380007410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:40:59.419812 systemd[1]: Started cri-containerd-dfe9baf47ec6f94a6618087b4beb520a3c28f8e04a1781f515aad5857af39bc3.scope - libcontainer container dfe9baf47ec6f94a6618087b4beb520a3c28f8e04a1781f515aad5857af39bc3.
Jan 13 20:40:59.445812 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:40:59.490452 containerd[1493]: time="2025-01-13T20:40:59.490344748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vsl6x,Uid:ed0e6474-c69d-40c4-a4de-b63bc69f1dbd,Namespace:default,Attempt:0,} returns sandbox id \"dfe9baf47ec6f94a6618087b4beb520a3c28f8e04a1781f515aad5857af39bc3\""
Jan 13 20:40:59.493339 containerd[1493]: time="2025-01-13T20:40:59.493243141Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 20:40:59.612090 kubelet[1814]: E0113 20:40:59.612002 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:00.613380 kubelet[1814]: E0113 20:41:00.612428 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:01.613895 kubelet[1814]: E0113 20:41:01.613781 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:01.950689 update_engine[1479]: I20250113 20:41:01.950444 1479 update_attempter.cc:509] Updating boot flags...
Jan 13 20:41:02.006727 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2935)
Jan 13 20:41:02.064415 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2934)
Jan 13 20:41:02.616336 kubelet[1814]: E0113 20:41:02.616287 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:03.616956 kubelet[1814]: E0113 20:41:03.616884 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:03.714995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3543718620.mount: Deactivated successfully.
Jan 13 20:41:04.617462 kubelet[1814]: E0113 20:41:04.617305 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:05.618410 kubelet[1814]: E0113 20:41:05.618352 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:05.857883 containerd[1493]: time="2025-01-13T20:41:05.857814413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:05.858812 containerd[1493]: time="2025-01-13T20:41:05.858723073Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018"
Jan 13 20:41:05.860585 containerd[1493]: time="2025-01-13T20:41:05.860530825Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:05.864158 containerd[1493]: time="2025-01-13T20:41:05.864104481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:05.865222 containerd[1493]: time="2025-01-13T20:41:05.865178866Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 6.371883065s"
Jan 13 20:41:05.865222 containerd[1493]: time="2025-01-13T20:41:05.865216667Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 13 20:41:05.867829 containerd[1493]: time="2025-01-13T20:41:05.867789117Z" level=info msg="CreateContainer within sandbox \"dfe9baf47ec6f94a6618087b4beb520a3c28f8e04a1781f515aad5857af39bc3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 13 20:41:05.887398 containerd[1493]: time="2025-01-13T20:41:05.887253628Z" level=info msg="CreateContainer within sandbox \"dfe9baf47ec6f94a6618087b4beb520a3c28f8e04a1781f515aad5857af39bc3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"5a04c5678a10e54529dc61672831b12e01b1c39cb30921ce954d3b836ad01340\""
Jan 13 20:41:05.887970 containerd[1493]: time="2025-01-13T20:41:05.887842463Z" level=info msg="StartContainer for \"5a04c5678a10e54529dc61672831b12e01b1c39cb30921ce954d3b836ad01340\""
Jan 13 20:41:05.923638 systemd[1]: Started cri-containerd-5a04c5678a10e54529dc61672831b12e01b1c39cb30921ce954d3b836ad01340.scope - libcontainer container 5a04c5678a10e54529dc61672831b12e01b1c39cb30921ce954d3b836ad01340.
Jan 13 20:41:05.953739 containerd[1493]: time="2025-01-13T20:41:05.953699927Z" level=info msg="StartContainer for \"5a04c5678a10e54529dc61672831b12e01b1c39cb30921ce954d3b836ad01340\" returns successfully"
Jan 13 20:41:06.618853 kubelet[1814]: E0113 20:41:06.618785 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:06.715900 kubelet[1814]: I0113 20:41:06.715833 1814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-vsl6x" podStartSLOduration=7.342012367 podStartE2EDuration="13.715815452s" podCreationTimestamp="2025-01-13 20:40:53 +0000 UTC" firstStartedPulling="2025-01-13 20:40:59.492671382 +0000 UTC m=+29.410994792" lastFinishedPulling="2025-01-13 20:41:05.866474468 +0000 UTC m=+35.784797877" observedRunningTime="2025-01-13 20:41:06.715567683 +0000 UTC m=+36.633891092" watchObservedRunningTime="2025-01-13 20:41:06.715815452 +0000 UTC m=+36.634138861"
Jan 13 20:41:07.620050 kubelet[1814]: E0113 20:41:07.619982 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:08.620725 kubelet[1814]: E0113 20:41:08.620650 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:09.621645 kubelet[1814]: E0113 20:41:09.621565 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:10.596457 kubelet[1814]: E0113 20:41:10.596381 1814 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:10.621968 kubelet[1814]: E0113 20:41:10.621914 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:11.200759 kubelet[1814]: I0113 20:41:11.200698 1814 topology_manager.go:215] "Topology Admit Handler" podUID="6df668a7-2687-4f24-9f65-80bb19742274" podNamespace="default" podName="nfs-server-provisioner-0"
Jan 13 20:41:11.207519 systemd[1]: Created slice kubepods-besteffort-pod6df668a7_2687_4f24_9f65_80bb19742274.slice - libcontainer container kubepods-besteffort-pod6df668a7_2687_4f24_9f65_80bb19742274.slice.
Jan 13 20:41:11.262858 kubelet[1814]: I0113 20:41:11.262787 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzwkx\" (UniqueName: \"kubernetes.io/projected/6df668a7-2687-4f24-9f65-80bb19742274-kube-api-access-hzwkx\") pod \"nfs-server-provisioner-0\" (UID: \"6df668a7-2687-4f24-9f65-80bb19742274\") " pod="default/nfs-server-provisioner-0"
Jan 13 20:41:11.262858 kubelet[1814]: I0113 20:41:11.262849 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6df668a7-2687-4f24-9f65-80bb19742274-data\") pod \"nfs-server-provisioner-0\" (UID: \"6df668a7-2687-4f24-9f65-80bb19742274\") " pod="default/nfs-server-provisioner-0"
Jan 13 20:41:11.510918 containerd[1493]: time="2025-01-13T20:41:11.510764087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6df668a7-2687-4f24-9f65-80bb19742274,Namespace:default,Attempt:0,}"
Jan 13 20:41:11.622077 kubelet[1814]: E0113 20:41:11.622025 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:11.813103 systemd-networkd[1411]: lxc87eaa934d14f: Link UP
Jan 13 20:41:11.823437 kernel: eth0: renamed from tmp4c1e4
Jan 13 20:41:11.836628 systemd-networkd[1411]: lxc87eaa934d14f: Gained carrier
Jan 13 20:41:12.106115 containerd[1493]: time="2025-01-13T20:41:12.106024280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:41:12.106115 containerd[1493]: time="2025-01-13T20:41:12.106084454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:41:12.106115 containerd[1493]: time="2025-01-13T20:41:12.106094272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:41:12.106308 containerd[1493]: time="2025-01-13T20:41:12.106166278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:41:12.134562 systemd[1]: Started cri-containerd-4c1e4751bfe4209de75c2564ff1292f1ade8ebfa644527aa52bf70039b391986.scope - libcontainer container 4c1e4751bfe4209de75c2564ff1292f1ade8ebfa644527aa52bf70039b391986.
Jan 13 20:41:12.148656 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:41:12.173914 containerd[1493]: time="2025-01-13T20:41:12.173861291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6df668a7-2687-4f24-9f65-80bb19742274,Namespace:default,Attempt:0,} returns sandbox id \"4c1e4751bfe4209de75c2564ff1292f1ade8ebfa644527aa52bf70039b391986\""
Jan 13 20:41:12.175454 containerd[1493]: time="2025-01-13T20:41:12.175421254Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 13 20:41:12.622565 kubelet[1814]: E0113 20:41:12.622478 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:13.623161 kubelet[1814]: E0113 20:41:13.623114 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:13.666539 systemd-networkd[1411]: lxc87eaa934d14f: Gained IPv6LL
Jan 13 20:41:14.623244 kubelet[1814]: E0113 20:41:14.623192 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:15.624412 kubelet[1814]: E0113 20:41:15.623805 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:15.967546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2698885750.mount: Deactivated successfully.
Jan 13 20:41:16.624297 kubelet[1814]: E0113 20:41:16.624257 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:17.624748 kubelet[1814]: E0113 20:41:17.624699 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:18.403671 containerd[1493]: time="2025-01-13T20:41:18.403609125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:18.404463 containerd[1493]: time="2025-01-13T20:41:18.404423889Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Jan 13 20:41:18.405712 containerd[1493]: time="2025-01-13T20:41:18.405669193Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:18.408486 containerd[1493]: time="2025-01-13T20:41:18.408452475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:18.409604 containerd[1493]: time="2025-01-13T20:41:18.409567715Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.234119972s"
Jan 13 20:41:18.409604 containerd[1493]: time="2025-01-13T20:41:18.409600296Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Jan 13 20:41:18.412218 containerd[1493]: time="2025-01-13T20:41:18.412169004Z" level=info msg="CreateContainer within sandbox \"4c1e4751bfe4209de75c2564ff1292f1ade8ebfa644527aa52bf70039b391986\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 13 20:41:18.425194 containerd[1493]: time="2025-01-13T20:41:18.425159385Z" level=info msg="CreateContainer within sandbox \"4c1e4751bfe4209de75c2564ff1292f1ade8ebfa644527aa52bf70039b391986\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"6fc87606a9a2de44d365c98e731bcac61355ba71295ca2a6f669717c53a14075\""
Jan 13 20:41:18.425695 containerd[1493]: time="2025-01-13T20:41:18.425654528Z" level=info msg="StartContainer for \"6fc87606a9a2de44d365c98e731bcac61355ba71295ca2a6f669717c53a14075\""
Jan 13 20:41:18.512621 systemd[1]: Started cri-containerd-6fc87606a9a2de44d365c98e731bcac61355ba71295ca2a6f669717c53a14075.scope - libcontainer container 6fc87606a9a2de44d365c98e731bcac61355ba71295ca2a6f669717c53a14075.
Jan 13 20:41:18.625824 kubelet[1814]: E0113 20:41:18.625764 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:18.770202 containerd[1493]: time="2025-01-13T20:41:18.769946449Z" level=info msg="StartContainer for \"6fc87606a9a2de44d365c98e731bcac61355ba71295ca2a6f669717c53a14075\" returns successfully"
Jan 13 20:41:19.626327 kubelet[1814]: E0113 20:41:19.626274 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:19.786231 kubelet[1814]: I0113 20:41:19.786167 1814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.550787057 podStartE2EDuration="8.786152483s" podCreationTimestamp="2025-01-13 20:41:11 +0000 UTC" firstStartedPulling="2025-01-13 20:41:12.17518034 +0000 UTC m=+42.093503749" lastFinishedPulling="2025-01-13 20:41:18.410545766 +0000 UTC m=+48.328869175" observedRunningTime="2025-01-13 20:41:19.786148044 +0000 UTC m=+49.704471443" watchObservedRunningTime="2025-01-13 20:41:19.786152483 +0000 UTC m=+49.704475892"
Jan 13 20:41:20.626967 kubelet[1814]: E0113 20:41:20.626912 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:21.627720 kubelet[1814]: E0113 20:41:21.627686 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:22.628418 kubelet[1814]: E0113 20:41:22.628342 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:23.629382 kubelet[1814]: E0113 20:41:23.629324 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:24.630425 kubelet[1814]: E0113 20:41:24.630333 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:25.631100 kubelet[1814]: E0113 20:41:25.631030 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:26.632114 kubelet[1814]: E0113 20:41:26.632072 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:27.632565 kubelet[1814]: E0113 20:41:27.632494 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:28.633111 kubelet[1814]: E0113 20:41:28.633048 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:28.798377 kubelet[1814]: I0113 20:41:28.798329 1814 topology_manager.go:215] "Topology Admit Handler" podUID="de508c76-acfe-4b58-b085-42d485ac8374" podNamespace="default" podName="test-pod-1"
Jan 13 20:41:28.804847 systemd[1]: Created slice kubepods-besteffort-podde508c76_acfe_4b58_b085_42d485ac8374.slice - libcontainer container kubepods-besteffort-podde508c76_acfe_4b58_b085_42d485ac8374.slice.
Jan 13 20:41:28.976750 kubelet[1814]: I0113 20:41:28.976562 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bpw4\" (UniqueName: \"kubernetes.io/projected/de508c76-acfe-4b58-b085-42d485ac8374-kube-api-access-5bpw4\") pod \"test-pod-1\" (UID: \"de508c76-acfe-4b58-b085-42d485ac8374\") " pod="default/test-pod-1"
Jan 13 20:41:28.976750 kubelet[1814]: I0113 20:41:28.976609 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fd7d3049-1ee9-45bc-8996-158b0559476d\" (UniqueName: \"kubernetes.io/nfs/de508c76-acfe-4b58-b085-42d485ac8374-pvc-fd7d3049-1ee9-45bc-8996-158b0559476d\") pod \"test-pod-1\" (UID: \"de508c76-acfe-4b58-b085-42d485ac8374\") " pod="default/test-pod-1"
Jan 13 20:41:29.109436 kernel: FS-Cache: Loaded
Jan 13 20:41:29.177966 kernel: RPC: Registered named UNIX socket transport module.
Jan 13 20:41:29.178104 kernel: RPC: Registered udp transport module.
Jan 13 20:41:29.178127 kernel: RPC: Registered tcp transport module.
Jan 13 20:41:29.178143 kernel: RPC: Registered tcp-with-tls transport module.
Jan 13 20:41:29.178572 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 13 20:41:29.439672 kernel: NFS: Registering the id_resolver key type
Jan 13 20:41:29.439840 kernel: Key type id_resolver registered
Jan 13 20:41:29.439861 kernel: Key type id_legacy registered
Jan 13 20:41:29.467185 nfsidmap[3222]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 13 20:41:29.472347 nfsidmap[3225]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 13 20:41:29.633666 kubelet[1814]: E0113 20:41:29.633588 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:29.708072 containerd[1493]: time="2025-01-13T20:41:29.707924615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:de508c76-acfe-4b58-b085-42d485ac8374,Namespace:default,Attempt:0,}"
Jan 13 20:41:29.737053 systemd-networkd[1411]: lxc728bade1c236: Link UP
Jan 13 20:41:29.750417 kernel: eth0: renamed from tmp9e92f
Jan 13 20:41:29.762776 systemd-networkd[1411]: lxc728bade1c236: Gained carrier
Jan 13 20:41:29.954282 containerd[1493]: time="2025-01-13T20:41:29.954147467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:41:29.954282 containerd[1493]: time="2025-01-13T20:41:29.954264767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:41:29.954530 containerd[1493]: time="2025-01-13T20:41:29.954290485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:41:29.954530 containerd[1493]: time="2025-01-13T20:41:29.954438133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:41:29.971601 systemd[1]: Started cri-containerd-9e92fdb6c084b2a13f31bd8a7eae81b0ad430d437f4d47ca430f4dc8098147e8.scope - libcontainer container 9e92fdb6c084b2a13f31bd8a7eae81b0ad430d437f4d47ca430f4dc8098147e8.
Jan 13 20:41:29.983361 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:41:30.009104 containerd[1493]: time="2025-01-13T20:41:30.009047521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:de508c76-acfe-4b58-b085-42d485ac8374,Namespace:default,Attempt:0,} returns sandbox id \"9e92fdb6c084b2a13f31bd8a7eae81b0ad430d437f4d47ca430f4dc8098147e8\""
Jan 13 20:41:30.011018 containerd[1493]: time="2025-01-13T20:41:30.010980462Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 20:41:30.389113 containerd[1493]: time="2025-01-13T20:41:30.389044413Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:30.389843 containerd[1493]: time="2025-01-13T20:41:30.389762533Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 13 20:41:30.392302 containerd[1493]: time="2025-01-13T20:41:30.392251620Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 381.225702ms"
Jan 13 20:41:30.392302 containerd[1493]: time="2025-01-13T20:41:30.392285644Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 13 20:41:30.394363 containerd[1493]: time="2025-01-13T20:41:30.394335334Z" level=info msg="CreateContainer within sandbox \"9e92fdb6c084b2a13f31bd8a7eae81b0ad430d437f4d47ca430f4dc8098147e8\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 13 20:41:30.407660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount888759732.mount: Deactivated successfully.
Jan 13 20:41:30.412247 containerd[1493]: time="2025-01-13T20:41:30.412190263Z" level=info msg="CreateContainer within sandbox \"9e92fdb6c084b2a13f31bd8a7eae81b0ad430d437f4d47ca430f4dc8098147e8\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"d3af71cb024e26f77de760e785a8021656571c875088e9a1da2afb730ced7822\""
Jan 13 20:41:30.412762 containerd[1493]: time="2025-01-13T20:41:30.412715289Z" level=info msg="StartContainer for \"d3af71cb024e26f77de760e785a8021656571c875088e9a1da2afb730ced7822\""
Jan 13 20:41:30.442554 systemd[1]: Started cri-containerd-d3af71cb024e26f77de760e785a8021656571c875088e9a1da2afb730ced7822.scope - libcontainer container d3af71cb024e26f77de760e785a8021656571c875088e9a1da2afb730ced7822.
Jan 13 20:41:30.469734 containerd[1493]: time="2025-01-13T20:41:30.469686384Z" level=info msg="StartContainer for \"d3af71cb024e26f77de760e785a8021656571c875088e9a1da2afb730ced7822\" returns successfully"
Jan 13 20:41:30.596143 kubelet[1814]: E0113 20:41:30.596070 1814 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:30.634615 kubelet[1814]: E0113 20:41:30.634551 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:30.808759 kubelet[1814]: I0113 20:41:30.808608 1814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.425964084 podStartE2EDuration="19.808592151s" podCreationTimestamp="2025-01-13 20:41:11 +0000 UTC" firstStartedPulling="2025-01-13 20:41:30.010340259 +0000 UTC m=+59.928663658" lastFinishedPulling="2025-01-13 20:41:30.392968316 +0000 UTC m=+60.311291725" observedRunningTime="2025-01-13 20:41:30.808428734 +0000 UTC m=+60.726752143" watchObservedRunningTime="2025-01-13 20:41:30.808592151 +0000 UTC m=+60.726915560"
Jan 13 20:41:31.394574 systemd-networkd[1411]: lxc728bade1c236: Gained IPv6LL
Jan 13 20:41:31.635265 kubelet[1814]: E0113 20:41:31.635222 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:32.635566 kubelet[1814]: E0113 20:41:32.635481 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:33.635923 kubelet[1814]: E0113 20:41:33.635875 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:34.110957 containerd[1493]: time="2025-01-13T20:41:34.110902329Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:41:34.118749 containerd[1493]: time="2025-01-13T20:41:34.118714291Z" level=info msg="StopContainer for \"29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc\" with timeout 2 (s)"
Jan 13 20:41:34.118933 containerd[1493]: time="2025-01-13T20:41:34.118911823Z" level=info msg="Stop container \"29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc\" with signal terminated"
Jan 13 20:41:34.126216 systemd-networkd[1411]: lxc_health: Link DOWN
Jan 13 20:41:34.126226 systemd-networkd[1411]: lxc_health: Lost carrier
Jan 13 20:41:34.146081 systemd[1]: cri-containerd-29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc.scope: Deactivated successfully.
Jan 13 20:41:34.146560 systemd[1]: cri-containerd-29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc.scope: Consumed 9.591s CPU time.
Jan 13 20:41:34.168678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc-rootfs.mount: Deactivated successfully.
Jan 13 20:41:34.179034 containerd[1493]: time="2025-01-13T20:41:34.178961943Z" level=info msg="shim disconnected" id=29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc namespace=k8s.io
Jan 13 20:41:34.179185 containerd[1493]: time="2025-01-13T20:41:34.179034389Z" level=warning msg="cleaning up after shim disconnected" id=29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc namespace=k8s.io
Jan 13 20:41:34.179185 containerd[1493]: time="2025-01-13T20:41:34.179046571Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:41:34.197892 containerd[1493]: time="2025-01-13T20:41:34.197849235Z" level=info msg="StopContainer for \"29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc\" returns successfully"
Jan 13 20:41:34.198515 containerd[1493]: time="2025-01-13T20:41:34.198495699Z" level=info msg="StopPodSandbox for \"276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7\""
Jan 13 20:41:34.198634 containerd[1493]: time="2025-01-13T20:41:34.198593563Z" level=info msg="Container to stop \"b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:41:34.198634 containerd[1493]: time="2025-01-13T20:41:34.198631845Z" level=info msg="Container to stop \"a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:41:34.198733 containerd[1493]: time="2025-01-13T20:41:34.198640571Z" level=info msg="Container to stop \"29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:41:34.198733 containerd[1493]: time="2025-01-13T20:41:34.198649147Z" level=info msg="Container to stop \"40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:41:34.198733 containerd[1493]: time="2025-01-13T20:41:34.198657823Z" level=info msg="Container to stop \"f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:41:34.201016 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7-shm.mount: Deactivated successfully.
Jan 13 20:41:34.205487 systemd[1]: cri-containerd-276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7.scope: Deactivated successfully.
Jan 13 20:41:34.224158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7-rootfs.mount: Deactivated successfully.
Jan 13 20:41:34.227892 containerd[1493]: time="2025-01-13T20:41:34.227833409Z" level=info msg="shim disconnected" id=276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7 namespace=k8s.io
Jan 13 20:41:34.227995 containerd[1493]: time="2025-01-13T20:41:34.227894483Z" level=warning msg="cleaning up after shim disconnected" id=276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7 namespace=k8s.io
Jan 13 20:41:34.227995 containerd[1493]: time="2025-01-13T20:41:34.227904903Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:41:34.241974 containerd[1493]: time="2025-01-13T20:41:34.241910788Z" level=info msg="TearDown network for sandbox \"276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7\" successfully"
Jan 13 20:41:34.241974 containerd[1493]: time="2025-01-13T20:41:34.241953738Z" level=info msg="StopPodSandbox for \"276dd89265c8e1790d8269850428e4ac3e7a73d1c1bdae1c7912a2c0d56798e7\" returns successfully"
Jan 13 20:41:34.411718 kubelet[1814]: I0113 20:41:34.411529 1814 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/336cf812-a290-4ec8-8004-9be7ce272af7-hubble-tls\") pod \"336cf812-a290-4ec8-8004-9be7ce272af7\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") "
Jan 13 20:41:34.411718 kubelet[1814]: I0113 20:41:34.411598 1814 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6pc6\" (UniqueName: \"kubernetes.io/projected/336cf812-a290-4ec8-8004-9be7ce272af7-kube-api-access-n6pc6\") pod \"336cf812-a290-4ec8-8004-9be7ce272af7\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") "
Jan 13 20:41:34.411718 kubelet[1814]: I0113 20:41:34.411629 1814 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/336cf812-a290-4ec8-8004-9be7ce272af7-cilium-config-path\") pod \"336cf812-a290-4ec8-8004-9be7ce272af7\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") "
Jan 13 20:41:34.411718 kubelet[1814]: I0113 20:41:34.411648 1814 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-hostproc\") pod \"336cf812-a290-4ec8-8004-9be7ce272af7\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") "
Jan 13 20:41:34.411718 kubelet[1814]: I0113 20:41:34.411671 1814 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/336cf812-a290-4ec8-8004-9be7ce272af7-clustermesh-secrets\") pod \"336cf812-a290-4ec8-8004-9be7ce272af7\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") "
Jan 13 20:41:34.411718 kubelet[1814]: I0113 20:41:34.411690 1814 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-cilium-run\") pod \"336cf812-a290-4ec8-8004-9be7ce272af7\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") "
Jan 13 20:41:34.412041 kubelet[1814]: I0113 20:41:34.411708 1814 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-bpf-maps\") pod \"336cf812-a290-4ec8-8004-9be7ce272af7\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") "
Jan 13 20:41:34.412041 kubelet[1814]: I0113 20:41:34.411725 1814 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-cilium-cgroup\") pod \"336cf812-a290-4ec8-8004-9be7ce272af7\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") "
Jan 13 20:41:34.412041 kubelet[1814]: I0113 20:41:34.411744 1814 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-cni-path\") pod \"336cf812-a290-4ec8-8004-9be7ce272af7\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") "
Jan 13 20:41:34.412041 kubelet[1814]: I0113 20:41:34.411763 1814 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-host-proc-sys-net\") pod \"336cf812-a290-4ec8-8004-9be7ce272af7\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") "
Jan 13 20:41:34.412041 kubelet[1814]: I0113 20:41:34.411780 1814 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-etc-cni-netd\") pod \"336cf812-a290-4ec8-8004-9be7ce272af7\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") "
Jan 13 20:41:34.412041 kubelet[1814]: I0113 20:41:34.411797 1814 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-lib-modules\") pod \"336cf812-a290-4ec8-8004-9be7ce272af7\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") "
Jan 13 20:41:34.412225 kubelet[1814]: I0113 20:41:34.411817 1814 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-host-proc-sys-kernel\") pod \"336cf812-a290-4ec8-8004-9be7ce272af7\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") "
Jan 13 20:41:34.412225 kubelet[1814]: I0113 20:41:34.411835 1814 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-xtables-lock\") pod \"336cf812-a290-4ec8-8004-9be7ce272af7\" (UID: \"336cf812-a290-4ec8-8004-9be7ce272af7\") "
Jan 13 20:41:34.412225 kubelet[1814]: I0113 20:41:34.411881 1814 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "336cf812-a290-4ec8-8004-9be7ce272af7" (UID: "336cf812-a290-4ec8-8004-9be7ce272af7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:41:34.412225 kubelet[1814]: I0113 20:41:34.412201 1814 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "336cf812-a290-4ec8-8004-9be7ce272af7" (UID: "336cf812-a290-4ec8-8004-9be7ce272af7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:41:34.412356 kubelet[1814]: I0113 20:41:34.412260 1814 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-hostproc" (OuterVolumeSpecName: "hostproc") pod "336cf812-a290-4ec8-8004-9be7ce272af7" (UID: "336cf812-a290-4ec8-8004-9be7ce272af7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:41:34.414613 kubelet[1814]: I0113 20:41:34.414577 1814 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "336cf812-a290-4ec8-8004-9be7ce272af7" (UID: "336cf812-a290-4ec8-8004-9be7ce272af7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:41:34.414680 kubelet[1814]: I0113 20:41:34.414638 1814 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "336cf812-a290-4ec8-8004-9be7ce272af7" (UID: "336cf812-a290-4ec8-8004-9be7ce272af7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:41:34.414680 kubelet[1814]: I0113 20:41:34.414657 1814 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "336cf812-a290-4ec8-8004-9be7ce272af7" (UID: "336cf812-a290-4ec8-8004-9be7ce272af7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:41:34.414680 kubelet[1814]: I0113 20:41:34.414673 1814 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-cni-path" (OuterVolumeSpecName: "cni-path") pod "336cf812-a290-4ec8-8004-9be7ce272af7" (UID: "336cf812-a290-4ec8-8004-9be7ce272af7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:41:34.414780 kubelet[1814]: I0113 20:41:34.414706 1814 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "336cf812-a290-4ec8-8004-9be7ce272af7" (UID: "336cf812-a290-4ec8-8004-9be7ce272af7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:41:34.414780 kubelet[1814]: I0113 20:41:34.414721 1814 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "336cf812-a290-4ec8-8004-9be7ce272af7" (UID: "336cf812-a290-4ec8-8004-9be7ce272af7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:41:34.414780 kubelet[1814]: I0113 20:41:34.414739 1814 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "336cf812-a290-4ec8-8004-9be7ce272af7" (UID: "336cf812-a290-4ec8-8004-9be7ce272af7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:41:34.416295 kubelet[1814]: I0113 20:41:34.416019 1814 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/336cf812-a290-4ec8-8004-9be7ce272af7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "336cf812-a290-4ec8-8004-9be7ce272af7" (UID: "336cf812-a290-4ec8-8004-9be7ce272af7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:41:34.416805 kubelet[1814]: I0113 20:41:34.416755 1814 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/336cf812-a290-4ec8-8004-9be7ce272af7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "336cf812-a290-4ec8-8004-9be7ce272af7" (UID: "336cf812-a290-4ec8-8004-9be7ce272af7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 20:41:34.417483 systemd[1]: var-lib-kubelet-pods-336cf812\x2da290\x2d4ec8\x2d8004\x2d9be7ce272af7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn6pc6.mount: Deactivated successfully.
Jan 13 20:41:34.417632 systemd[1]: var-lib-kubelet-pods-336cf812\x2da290\x2d4ec8\x2d8004\x2d9be7ce272af7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 20:41:34.419041 kubelet[1814]: I0113 20:41:34.419004 1814 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/336cf812-a290-4ec8-8004-9be7ce272af7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "336cf812-a290-4ec8-8004-9be7ce272af7" (UID: "336cf812-a290-4ec8-8004-9be7ce272af7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 13 20:41:34.419041 kubelet[1814]: I0113 20:41:34.419009 1814 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/336cf812-a290-4ec8-8004-9be7ce272af7-kube-api-access-n6pc6" (OuterVolumeSpecName: "kube-api-access-n6pc6") pod "336cf812-a290-4ec8-8004-9be7ce272af7" (UID: "336cf812-a290-4ec8-8004-9be7ce272af7"). InnerVolumeSpecName "kube-api-access-n6pc6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:41:34.512861 kubelet[1814]: I0113 20:41:34.512785 1814 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-host-proc-sys-net\") on node \"10.0.0.101\" DevicePath \"\""
Jan 13 20:41:34.512861 kubelet[1814]: I0113 20:41:34.512827 1814 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-etc-cni-netd\") on node \"10.0.0.101\" DevicePath \"\""
Jan 13 20:41:34.512861 kubelet[1814]: I0113 20:41:34.512835 1814 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-lib-modules\") on node \"10.0.0.101\" DevicePath \"\""
Jan 13 20:41:34.512861 kubelet[1814]: I0113 20:41:34.512844 1814 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-host-proc-sys-kernel\") on node \"10.0.0.101\" DevicePath \"\""
Jan 13 20:41:34.512861 kubelet[1814]: I0113 20:41:34.512852 1814 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-xtables-lock\") on node \"10.0.0.101\" DevicePath \"\""
Jan 13 20:41:34.512861 kubelet[1814]: I0113 20:41:34.512860 1814 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-n6pc6\" (UniqueName: \"kubernetes.io/projected/336cf812-a290-4ec8-8004-9be7ce272af7-kube-api-access-n6pc6\") on node \"10.0.0.101\" DevicePath \"\""
Jan 13 20:41:34.512861 kubelet[1814]: I0113 20:41:34.512868 1814 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/336cf812-a290-4ec8-8004-9be7ce272af7-cilium-config-path\") on node \"10.0.0.101\" DevicePath \"\""
Jan 13 20:41:34.512861 kubelet[1814]: I0113 20:41:34.512877 1814 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-hostproc\") on node \"10.0.0.101\" DevicePath \"\""
Jan 13 20:41:34.513175 kubelet[1814]: I0113 20:41:34.512885 1814 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/336cf812-a290-4ec8-8004-9be7ce272af7-clustermesh-secrets\") on node \"10.0.0.101\" DevicePath \"\""
Jan 13 20:41:34.513175 kubelet[1814]: I0113 20:41:34.512892 1814 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/336cf812-a290-4ec8-8004-9be7ce272af7-hubble-tls\") on node \"10.0.0.101\" DevicePath \"\""
Jan 13 20:41:34.513175 kubelet[1814]: I0113 20:41:34.512899 1814 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-cilium-run\") on node \"10.0.0.101\" DevicePath \"\""
Jan 13 20:41:34.513175 kubelet[1814]: I0113 20:41:34.512906 1814 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-bpf-maps\") on node \"10.0.0.101\" DevicePath \"\""
Jan 13 20:41:34.513175 kubelet[1814]: I0113 20:41:34.512913 1814 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-cilium-cgroup\") on node \"10.0.0.101\" DevicePath \"\""
Jan 13 20:41:34.513175 kubelet[1814]: I0113 20:41:34.512920 1814 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/336cf812-a290-4ec8-8004-9be7ce272af7-cni-path\") on node \"10.0.0.101\" DevicePath \"\""
Jan 13 20:41:34.636062 kubelet[1814]: E0113 20:41:34.635980 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:34.810662 kubelet[1814]: I0113 20:41:34.810545 1814 scope.go:117] "RemoveContainer" containerID="29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc"
Jan 13 20:41:34.811915 containerd[1493]: time="2025-01-13T20:41:34.811858231Z" level=info msg="RemoveContainer for \"29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc\""
Jan 13 20:41:34.816541 systemd[1]: Removed slice kubepods-burstable-pod336cf812_a290_4ec8_8004_9be7ce272af7.slice - libcontainer container kubepods-burstable-pod336cf812_a290_4ec8_8004_9be7ce272af7.slice.
Jan 13 20:41:34.816656 systemd[1]: kubepods-burstable-pod336cf812_a290_4ec8_8004_9be7ce272af7.slice: Consumed 9.783s CPU time.
Jan 13 20:41:34.880367 containerd[1493]: time="2025-01-13T20:41:34.880290935Z" level=info msg="RemoveContainer for \"29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc\" returns successfully"
Jan 13 20:41:34.880836 kubelet[1814]: I0113 20:41:34.880711 1814 scope.go:117] "RemoveContainer" containerID="f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf"
Jan 13 20:41:34.882470 containerd[1493]: time="2025-01-13T20:41:34.882268329Z" level=info msg="RemoveContainer for \"f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf\""
Jan 13 20:41:34.944406 containerd[1493]: time="2025-01-13T20:41:34.944343923Z" level=info msg="RemoveContainer for \"f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf\" returns successfully"
Jan 13 20:41:34.944716 kubelet[1814]: I0113 20:41:34.944684 1814 scope.go:117] "RemoveContainer" containerID="a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1"
Jan 13 20:41:34.945714 containerd[1493]: time="2025-01-13T20:41:34.945673400Z" level=info msg="RemoveContainer for \"a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1\""
Jan 13 20:41:34.995893 containerd[1493]: time="2025-01-13T20:41:34.995733909Z" level=info msg="RemoveContainer for \"a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1\" returns successfully"
Jan 13 20:41:34.996113 kubelet[1814]: I0113 20:41:34.996068 1814 scope.go:117] "RemoveContainer" containerID="b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f"
Jan 13 20:41:34.997226 containerd[1493]: time="2025-01-13T20:41:34.997192689Z" level=info msg="RemoveContainer for \"b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f\""
Jan 13 20:41:35.031515 containerd[1493]: time="2025-01-13T20:41:35.031441998Z" level=info msg="RemoveContainer for \"b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f\" returns successfully"
Jan 13 20:41:35.031833 kubelet[1814]: I0113 20:41:35.031799 1814 scope.go:117] "RemoveContainer" containerID="40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c"
Jan 13 20:41:35.033104 containerd[1493]: time="2025-01-13T20:41:35.033069424Z" level=info msg="RemoveContainer for \"40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c\""
Jan 13 20:41:35.058988 containerd[1493]: time="2025-01-13T20:41:35.058936200Z" level=info msg="RemoveContainer for \"40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c\" returns successfully"
Jan 13 20:41:35.059337 kubelet[1814]: I0113 20:41:35.059222 1814 scope.go:117] "RemoveContainer" containerID="29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc"
Jan 13 20:41:35.059538 containerd[1493]: time="2025-01-13T20:41:35.059473148Z" level=error msg="ContainerStatus for \"29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc\": not found"
Jan 13 20:41:35.059708 kubelet[1814]: E0113 20:41:35.059678 1814 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc\": not found" containerID="29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc"
Jan 13 20:41:35.059813 kubelet[1814]: I0113 20:41:35.059717 1814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc"} err="failed to get container status \"29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"29781c9b26bb3b6ff90b13affb1dda03092521c1960bb014e2b1b4bff0f6e2dc\": not found"
Jan 13 20:41:35.059813 kubelet[1814]: I0113 20:41:35.059811 1814 scope.go:117] "RemoveContainer" containerID="f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf"
Jan 13 20:41:35.060038 containerd[1493]: time="2025-01-13T20:41:35.059979529Z" level=error msg="ContainerStatus for \"f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf\": not found"
Jan 13 20:41:35.060108 kubelet[1814]: E0113 20:41:35.060083 1814 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf\": not found" containerID="f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf"
Jan 13 20:41:35.060142 kubelet[1814]: I0113 20:41:35.060105 1814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf"} err="failed to get container status \"f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf\": rpc error: code = NotFound desc = an error occurred when try to find container
\"f7e3f6fbaae819a43bb946b8c917b97b9221417e1ce24caece28cdf4a5976ebf\": not found" Jan 13 20:41:35.060142 kubelet[1814]: I0113 20:41:35.060117 1814 scope.go:117] "RemoveContainer" containerID="a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1" Jan 13 20:41:35.060290 containerd[1493]: time="2025-01-13T20:41:35.060240910Z" level=error msg="ContainerStatus for \"a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1\": not found" Jan 13 20:41:35.060365 kubelet[1814]: E0113 20:41:35.060347 1814 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1\": not found" containerID="a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1" Jan 13 20:41:35.060418 kubelet[1814]: I0113 20:41:35.060362 1814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1"} err="failed to get container status \"a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1\": rpc error: code = NotFound desc = an error occurred when try to find container \"a82e61df6a5caba3e8fdde521216db9d44a6c3bf581aa62a5ea10d7c7a95e8f1\": not found" Jan 13 20:41:35.060418 kubelet[1814]: I0113 20:41:35.060374 1814 scope.go:117] "RemoveContainer" containerID="b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f" Jan 13 20:41:35.060521 containerd[1493]: time="2025-01-13T20:41:35.060498714Z" level=error msg="ContainerStatus for \"b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f\": not found" Jan 13 20:41:35.060609 kubelet[1814]: E0113 20:41:35.060584 1814 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f\": not found" containerID="b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f" Jan 13 20:41:35.060609 kubelet[1814]: I0113 20:41:35.060602 1814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f"} err="failed to get container status \"b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5dc09e8ba01abd8c62e47462daf4407097d63c59dd8efe910e9f43c0aeab01f\": not found" Jan 13 20:41:35.060609 kubelet[1814]: I0113 20:41:35.060615 1814 scope.go:117] "RemoveContainer" containerID="40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c" Jan 13 20:41:35.060874 containerd[1493]: time="2025-01-13T20:41:35.060730799Z" level=error msg="ContainerStatus for \"40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c\": not found" Jan 13 20:41:35.060917 kubelet[1814]: E0113 20:41:35.060850 1814 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c\": not found" containerID="40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c" Jan 13 20:41:35.060917 kubelet[1814]: I0113 20:41:35.060875 1814 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c"} err="failed to get container status \"40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"40631099e98a853590128c7b21ff60bba592f9360872a1565f4b44639cc49a7c\": not found" Jan 13 20:41:35.096327 systemd[1]: var-lib-kubelet-pods-336cf812\x2da290\x2d4ec8\x2d8004\x2d9be7ce272af7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:41:35.500618 kubelet[1814]: I0113 20:41:35.500576 1814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="336cf812-a290-4ec8-8004-9be7ce272af7" path="/var/lib/kubelet/pods/336cf812-a290-4ec8-8004-9be7ce272af7/volumes" Jan 13 20:41:35.637004 kubelet[1814]: E0113 20:41:35.636911 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:36.630366 kubelet[1814]: E0113 20:41:36.630312 1814 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:41:36.637734 kubelet[1814]: E0113 20:41:36.637689 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:37.637891 kubelet[1814]: E0113 20:41:37.637832 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:38.097191 kubelet[1814]: I0113 20:41:38.097053 1814 topology_manager.go:215] "Topology Admit Handler" podUID="649f86d2-6c84-45e3-b773-46e6c1fc85b3" podNamespace="kube-system" podName="cilium-operator-599987898-9n7dm" Jan 13 20:41:38.097191 kubelet[1814]: E0113 20:41:38.097114 1814 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="336cf812-a290-4ec8-8004-9be7ce272af7" containerName="mount-cgroup" Jan 13 20:41:38.097191 kubelet[1814]: E0113 20:41:38.097127 1814 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="336cf812-a290-4ec8-8004-9be7ce272af7" containerName="mount-bpf-fs" Jan 13 20:41:38.097191 kubelet[1814]: E0113 20:41:38.097135 1814 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="336cf812-a290-4ec8-8004-9be7ce272af7" containerName="clean-cilium-state" Jan 13 20:41:38.097191 kubelet[1814]: E0113 20:41:38.097143 1814 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="336cf812-a290-4ec8-8004-9be7ce272af7" containerName="cilium-agent" Jan 13 20:41:38.097191 kubelet[1814]: E0113 20:41:38.097152 1814 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="336cf812-a290-4ec8-8004-9be7ce272af7" containerName="apply-sysctl-overwrites" Jan 13 20:41:38.097191 kubelet[1814]: I0113 20:41:38.097180 1814 memory_manager.go:354] "RemoveStaleState removing state" podUID="336cf812-a290-4ec8-8004-9be7ce272af7" containerName="cilium-agent" Jan 13 20:41:38.102640 systemd[1]: Created slice kubepods-besteffort-pod649f86d2_6c84_45e3_b773_46e6c1fc85b3.slice - libcontainer container kubepods-besteffort-pod649f86d2_6c84_45e3_b773_46e6c1fc85b3.slice. Jan 13 20:41:38.184381 kubelet[1814]: I0113 20:41:38.184317 1814 topology_manager.go:215] "Topology Admit Handler" podUID="67ecf141-ac0b-4870-bf2c-30c265701290" podNamespace="kube-system" podName="cilium-qwjtc" Jan 13 20:41:38.190123 systemd[1]: Created slice kubepods-burstable-pod67ecf141_ac0b_4870_bf2c_30c265701290.slice - libcontainer container kubepods-burstable-pod67ecf141_ac0b_4870_bf2c_30c265701290.slice. 
Jan 13 20:41:38.235416 kubelet[1814]: I0113 20:41:38.235357 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/649f86d2-6c84-45e3-b773-46e6c1fc85b3-cilium-config-path\") pod \"cilium-operator-599987898-9n7dm\" (UID: \"649f86d2-6c84-45e3-b773-46e6c1fc85b3\") " pod="kube-system/cilium-operator-599987898-9n7dm"
Jan 13 20:41:38.235416 kubelet[1814]: I0113 20:41:38.235419 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwx4z\" (UniqueName: \"kubernetes.io/projected/649f86d2-6c84-45e3-b773-46e6c1fc85b3-kube-api-access-cwx4z\") pod \"cilium-operator-599987898-9n7dm\" (UID: \"649f86d2-6c84-45e3-b773-46e6c1fc85b3\") " pod="kube-system/cilium-operator-599987898-9n7dm"
Jan 13 20:41:38.335872 kubelet[1814]: I0113 20:41:38.335796 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/67ecf141-ac0b-4870-bf2c-30c265701290-cilium-ipsec-secrets\") pod \"cilium-qwjtc\" (UID: \"67ecf141-ac0b-4870-bf2c-30c265701290\") " pod="kube-system/cilium-qwjtc"
Jan 13 20:41:38.335872 kubelet[1814]: I0113 20:41:38.335854 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/67ecf141-ac0b-4870-bf2c-30c265701290-cni-path\") pod \"cilium-qwjtc\" (UID: \"67ecf141-ac0b-4870-bf2c-30c265701290\") " pod="kube-system/cilium-qwjtc"
Jan 13 20:41:38.335872 kubelet[1814]: I0113 20:41:38.335875 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67ecf141-ac0b-4870-bf2c-30c265701290-cilium-config-path\") pod \"cilium-qwjtc\" (UID: \"67ecf141-ac0b-4870-bf2c-30c265701290\") " pod="kube-system/cilium-qwjtc"
Jan 13 20:41:38.336077 kubelet[1814]: I0113 20:41:38.335890 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67ecf141-ac0b-4870-bf2c-30c265701290-xtables-lock\") pod \"cilium-qwjtc\" (UID: \"67ecf141-ac0b-4870-bf2c-30c265701290\") " pod="kube-system/cilium-qwjtc"
Jan 13 20:41:38.336077 kubelet[1814]: I0113 20:41:38.335903 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/67ecf141-ac0b-4870-bf2c-30c265701290-host-proc-sys-kernel\") pod \"cilium-qwjtc\" (UID: \"67ecf141-ac0b-4870-bf2c-30c265701290\") " pod="kube-system/cilium-qwjtc"
Jan 13 20:41:38.336077 kubelet[1814]: I0113 20:41:38.335918 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/67ecf141-ac0b-4870-bf2c-30c265701290-hubble-tls\") pod \"cilium-qwjtc\" (UID: \"67ecf141-ac0b-4870-bf2c-30c265701290\") " pod="kube-system/cilium-qwjtc"
Jan 13 20:41:38.336077 kubelet[1814]: I0113 20:41:38.335946 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/67ecf141-ac0b-4870-bf2c-30c265701290-etc-cni-netd\") pod \"cilium-qwjtc\" (UID: \"67ecf141-ac0b-4870-bf2c-30c265701290\") " pod="kube-system/cilium-qwjtc"
Jan 13 20:41:38.336077 kubelet[1814]: I0113 20:41:38.335970 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67ecf141-ac0b-4870-bf2c-30c265701290-lib-modules\") pod \"cilium-qwjtc\" (UID: \"67ecf141-ac0b-4870-bf2c-30c265701290\") " pod="kube-system/cilium-qwjtc"
Jan 13 20:41:38.336077 kubelet[1814]: I0113 20:41:38.335984 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/67ecf141-ac0b-4870-bf2c-30c265701290-clustermesh-secrets\") pod \"cilium-qwjtc\" (UID: \"67ecf141-ac0b-4870-bf2c-30c265701290\") " pod="kube-system/cilium-qwjtc"
Jan 13 20:41:38.336202 kubelet[1814]: I0113 20:41:38.335999 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/67ecf141-ac0b-4870-bf2c-30c265701290-cilium-run\") pod \"cilium-qwjtc\" (UID: \"67ecf141-ac0b-4870-bf2c-30c265701290\") " pod="kube-system/cilium-qwjtc"
Jan 13 20:41:38.336202 kubelet[1814]: I0113 20:41:38.336012 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/67ecf141-ac0b-4870-bf2c-30c265701290-bpf-maps\") pod \"cilium-qwjtc\" (UID: \"67ecf141-ac0b-4870-bf2c-30c265701290\") " pod="kube-system/cilium-qwjtc"
Jan 13 20:41:38.336202 kubelet[1814]: I0113 20:41:38.336025 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/67ecf141-ac0b-4870-bf2c-30c265701290-hostproc\") pod \"cilium-qwjtc\" (UID: \"67ecf141-ac0b-4870-bf2c-30c265701290\") " pod="kube-system/cilium-qwjtc"
Jan 13 20:41:38.336202 kubelet[1814]: I0113 20:41:38.336037 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/67ecf141-ac0b-4870-bf2c-30c265701290-cilium-cgroup\") pod \"cilium-qwjtc\" (UID: \"67ecf141-ac0b-4870-bf2c-30c265701290\") " pod="kube-system/cilium-qwjtc"
Jan 13 20:41:38.336202 kubelet[1814]: I0113 20:41:38.336050 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/67ecf141-ac0b-4870-bf2c-30c265701290-host-proc-sys-net\") pod \"cilium-qwjtc\" (UID: \"67ecf141-ac0b-4870-bf2c-30c265701290\") " pod="kube-system/cilium-qwjtc"
Jan 13 20:41:38.336202 kubelet[1814]: I0113 20:41:38.336083 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7bh6\" (UniqueName: \"kubernetes.io/projected/67ecf141-ac0b-4870-bf2c-30c265701290-kube-api-access-c7bh6\") pod \"cilium-qwjtc\" (UID: \"67ecf141-ac0b-4870-bf2c-30c265701290\") " pod="kube-system/cilium-qwjtc"
Jan 13 20:41:38.638516 kubelet[1814]: E0113 20:41:38.638452 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:38.705785 kubelet[1814]: E0113 20:41:38.705728 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:38.706490 containerd[1493]: time="2025-01-13T20:41:38.706451025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9n7dm,Uid:649f86d2-6c84-45e3-b773-46e6c1fc85b3,Namespace:kube-system,Attempt:0,}"
Jan 13 20:41:38.799705 kubelet[1814]: E0113 20:41:38.799661 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:38.800292 containerd[1493]: time="2025-01-13T20:41:38.800242367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qwjtc,Uid:67ecf141-ac0b-4870-bf2c-30c265701290,Namespace:kube-system,Attempt:0,}"
Jan 13 20:41:39.634137 containerd[1493]: time="2025-01-13T20:41:39.633432814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:41:39.634137 containerd[1493]: time="2025-01-13T20:41:39.634102982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:41:39.634137 containerd[1493]: time="2025-01-13T20:41:39.634119533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:41:39.634319 containerd[1493]: time="2025-01-13T20:41:39.634209523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:41:39.638608 kubelet[1814]: E0113 20:41:39.638576 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:39.657067 containerd[1493]: time="2025-01-13T20:41:39.656937942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:41:39.657067 containerd[1493]: time="2025-01-13T20:41:39.656988416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:41:39.657067 containerd[1493]: time="2025-01-13T20:41:39.656999207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:41:39.657361 containerd[1493]: time="2025-01-13T20:41:39.657069609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:41:39.658530 systemd[1]: Started cri-containerd-6ef793beaabc39f41a92ab74c28f000a5c073b211af0f825ea7e630019853079.scope - libcontainer container 6ef793beaabc39f41a92ab74c28f000a5c073b211af0f825ea7e630019853079.
Jan 13 20:41:39.681578 systemd[1]: Started cri-containerd-6a0bd0adb984f3312aff8c79fc6f3df71b48f0b4ec45c80a8c953aaa6d7c1ccb.scope - libcontainer container 6a0bd0adb984f3312aff8c79fc6f3df71b48f0b4ec45c80a8c953aaa6d7c1ccb.
Jan 13 20:41:39.702768 containerd[1493]: time="2025-01-13T20:41:39.702667382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9n7dm,Uid:649f86d2-6c84-45e3-b773-46e6c1fc85b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ef793beaabc39f41a92ab74c28f000a5c073b211af0f825ea7e630019853079\""
Jan 13 20:41:39.703516 kubelet[1814]: E0113 20:41:39.703494 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:39.704730 containerd[1493]: time="2025-01-13T20:41:39.704694577Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 13 20:41:39.707285 containerd[1493]: time="2025-01-13T20:41:39.707237711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qwjtc,Uid:67ecf141-ac0b-4870-bf2c-30c265701290,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a0bd0adb984f3312aff8c79fc6f3df71b48f0b4ec45c80a8c953aaa6d7c1ccb\""
Jan 13 20:41:39.708027 kubelet[1814]: E0113 20:41:39.707993 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:39.709971 containerd[1493]: time="2025-01-13T20:41:39.709942360Z" level=info msg="CreateContainer within sandbox \"6a0bd0adb984f3312aff8c79fc6f3df71b48f0b4ec45c80a8c953aaa6d7c1ccb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:41:40.092459 containerd[1493]: time="2025-01-13T20:41:40.092357595Z" level=info msg="CreateContainer within sandbox \"6a0bd0adb984f3312aff8c79fc6f3df71b48f0b4ec45c80a8c953aaa6d7c1ccb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"879f316853cff8ab32fa3e8137ab2c2dcbae0bbe573ef0caf0bb847cad16851f\""
Jan 13 20:41:40.093096 containerd[1493]: time="2025-01-13T20:41:40.093065384Z" level=info msg="StartContainer for \"879f316853cff8ab32fa3e8137ab2c2dcbae0bbe573ef0caf0bb847cad16851f\""
Jan 13 20:41:40.119571 systemd[1]: Started cri-containerd-879f316853cff8ab32fa3e8137ab2c2dcbae0bbe573ef0caf0bb847cad16851f.scope - libcontainer container 879f316853cff8ab32fa3e8137ab2c2dcbae0bbe573ef0caf0bb847cad16851f.
Jan 13 20:41:40.156320 systemd[1]: cri-containerd-879f316853cff8ab32fa3e8137ab2c2dcbae0bbe573ef0caf0bb847cad16851f.scope: Deactivated successfully.
Jan 13 20:41:40.178576 containerd[1493]: time="2025-01-13T20:41:40.178520142Z" level=info msg="StartContainer for \"879f316853cff8ab32fa3e8137ab2c2dcbae0bbe573ef0caf0bb847cad16851f\" returns successfully"
Jan 13 20:41:40.212599 containerd[1493]: time="2025-01-13T20:41:40.212526339Z" level=info msg="shim disconnected" id=879f316853cff8ab32fa3e8137ab2c2dcbae0bbe573ef0caf0bb847cad16851f namespace=k8s.io
Jan 13 20:41:40.212599 containerd[1493]: time="2025-01-13T20:41:40.212591812Z" level=warning msg="cleaning up after shim disconnected" id=879f316853cff8ab32fa3e8137ab2c2dcbae0bbe573ef0caf0bb847cad16851f namespace=k8s.io
Jan 13 20:41:40.212599 containerd[1493]: time="2025-01-13T20:41:40.212600649Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:41:40.498798 kubelet[1814]: E0113 20:41:40.498633 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:40.639833 kubelet[1814]: E0113 20:41:40.639767 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:40.825796 kubelet[1814]: E0113 20:41:40.825674 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:40.827600 containerd[1493]: time="2025-01-13T20:41:40.827568822Z" level=info msg="CreateContainer within sandbox \"6a0bd0adb984f3312aff8c79fc6f3df71b48f0b4ec45c80a8c953aaa6d7c1ccb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:41:40.842205 containerd[1493]: time="2025-01-13T20:41:40.842157378Z" level=info msg="CreateContainer within sandbox \"6a0bd0adb984f3312aff8c79fc6f3df71b48f0b4ec45c80a8c953aaa6d7c1ccb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"759a200002c2f057a938f80b81e885d00881adddc6cbdd48a64f71e970cdd457\""
Jan 13 20:41:40.842725 containerd[1493]: time="2025-01-13T20:41:40.842693615Z" level=info msg="StartContainer for \"759a200002c2f057a938f80b81e885d00881adddc6cbdd48a64f71e970cdd457\""
Jan 13 20:41:40.873558 systemd[1]: Started cri-containerd-759a200002c2f057a938f80b81e885d00881adddc6cbdd48a64f71e970cdd457.scope - libcontainer container 759a200002c2f057a938f80b81e885d00881adddc6cbdd48a64f71e970cdd457.
Jan 13 20:41:40.901375 containerd[1493]: time="2025-01-13T20:41:40.901309610Z" level=info msg="StartContainer for \"759a200002c2f057a938f80b81e885d00881adddc6cbdd48a64f71e970cdd457\" returns successfully"
Jan 13 20:41:40.910584 systemd[1]: cri-containerd-759a200002c2f057a938f80b81e885d00881adddc6cbdd48a64f71e970cdd457.scope: Deactivated successfully.
Jan 13 20:41:40.935866 containerd[1493]: time="2025-01-13T20:41:40.935799725Z" level=info msg="shim disconnected" id=759a200002c2f057a938f80b81e885d00881adddc6cbdd48a64f71e970cdd457 namespace=k8s.io
Jan 13 20:41:40.935866 containerd[1493]: time="2025-01-13T20:41:40.935857794Z" level=warning msg="cleaning up after shim disconnected" id=759a200002c2f057a938f80b81e885d00881adddc6cbdd48a64f71e970cdd457 namespace=k8s.io
Jan 13 20:41:40.935866 containerd[1493]: time="2025-01-13T20:41:40.935866050Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:41:41.621200 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-759a200002c2f057a938f80b81e885d00881adddc6cbdd48a64f71e970cdd457-rootfs.mount: Deactivated successfully.
Jan 13 20:41:41.631147 kubelet[1814]: E0113 20:41:41.631105 1814 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:41:41.640449 kubelet[1814]: E0113 20:41:41.640372 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:41.828945 kubelet[1814]: E0113 20:41:41.828912 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:41.830522 containerd[1493]: time="2025-01-13T20:41:41.830490545Z" level=info msg="CreateContainer within sandbox \"6a0bd0adb984f3312aff8c79fc6f3df71b48f0b4ec45c80a8c953aaa6d7c1ccb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:41:42.641450 kubelet[1814]: E0113 20:41:42.641355 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:43.199875 containerd[1493]: time="2025-01-13T20:41:43.199826037Z" level=info msg="CreateContainer within sandbox \"6a0bd0adb984f3312aff8c79fc6f3df71b48f0b4ec45c80a8c953aaa6d7c1ccb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e9d4eb6a4bb766dc24434b0a3fa941c59560e560a3d8c2dc205612f40b338bd3\""
Jan 13 20:41:43.200590 containerd[1493]: time="2025-01-13T20:41:43.200538925Z" level=info msg="StartContainer for \"e9d4eb6a4bb766dc24434b0a3fa941c59560e560a3d8c2dc205612f40b338bd3\""
Jan 13 20:41:43.232502 systemd[1]: Started cri-containerd-e9d4eb6a4bb766dc24434b0a3fa941c59560e560a3d8c2dc205612f40b338bd3.scope - libcontainer container e9d4eb6a4bb766dc24434b0a3fa941c59560e560a3d8c2dc205612f40b338bd3.
Jan 13 20:41:43.263949 systemd[1]: cri-containerd-e9d4eb6a4bb766dc24434b0a3fa941c59560e560a3d8c2dc205612f40b338bd3.scope: Deactivated successfully.
Jan 13 20:41:43.348665 containerd[1493]: time="2025-01-13T20:41:43.348593147Z" level=info msg="StartContainer for \"e9d4eb6a4bb766dc24434b0a3fa941c59560e560a3d8c2dc205612f40b338bd3\" returns successfully"
Jan 13 20:41:43.366025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9d4eb6a4bb766dc24434b0a3fa941c59560e560a3d8c2dc205612f40b338bd3-rootfs.mount: Deactivated successfully.
Jan 13 20:41:43.484932 containerd[1493]: time="2025-01-13T20:41:43.484762343Z" level=info msg="shim disconnected" id=e9d4eb6a4bb766dc24434b0a3fa941c59560e560a3d8c2dc205612f40b338bd3 namespace=k8s.io
Jan 13 20:41:43.484932 containerd[1493]: time="2025-01-13T20:41:43.484814972Z" level=warning msg="cleaning up after shim disconnected" id=e9d4eb6a4bb766dc24434b0a3fa941c59560e560a3d8c2dc205612f40b338bd3 namespace=k8s.io
Jan 13 20:41:43.484932 containerd[1493]: time="2025-01-13T20:41:43.484823528Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:41:43.641658 kubelet[1814]: E0113 20:41:43.641577 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:43.836563 kubelet[1814]: E0113 20:41:43.836439 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:43.838054 containerd[1493]: time="2025-01-13T20:41:43.838010774Z" level=info msg="CreateContainer within sandbox \"6a0bd0adb984f3312aff8c79fc6f3df71b48f0b4ec45c80a8c953aaa6d7c1ccb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:41:43.850075 kubelet[1814]: I0113 20:41:43.850031 1814 setters.go:580] "Node became not ready" node="10.0.0.101" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:41:43Z","lastTransitionTime":"2025-01-13T20:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:41:44.012671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1886391566.mount: Deactivated successfully.
Jan 13 20:41:44.066719 containerd[1493]: time="2025-01-13T20:41:44.066644619Z" level=info msg="CreateContainer within sandbox \"6a0bd0adb984f3312aff8c79fc6f3df71b48f0b4ec45c80a8c953aaa6d7c1ccb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fd1025be08157d50315a79e2246cd2a3da80db505ab69ab3bd8fc71807ecc49b\""
Jan 13 20:41:44.067207 containerd[1493]: time="2025-01-13T20:41:44.067183782Z" level=info msg="StartContainer for \"fd1025be08157d50315a79e2246cd2a3da80db505ab69ab3bd8fc71807ecc49b\""
Jan 13 20:41:44.097517 systemd[1]: Started cri-containerd-fd1025be08157d50315a79e2246cd2a3da80db505ab69ab3bd8fc71807ecc49b.scope - libcontainer container fd1025be08157d50315a79e2246cd2a3da80db505ab69ab3bd8fc71807ecc49b.
Jan 13 20:41:44.119765 systemd[1]: cri-containerd-fd1025be08157d50315a79e2246cd2a3da80db505ab69ab3bd8fc71807ecc49b.scope: Deactivated successfully.
Jan 13 20:41:44.124308 containerd[1493]: time="2025-01-13T20:41:44.124265161Z" level=info msg="StartContainer for \"fd1025be08157d50315a79e2246cd2a3da80db505ab69ab3bd8fc71807ecc49b\" returns successfully"
Jan 13 20:41:44.149420 containerd[1493]: time="2025-01-13T20:41:44.149343549Z" level=info msg="shim disconnected" id=fd1025be08157d50315a79e2246cd2a3da80db505ab69ab3bd8fc71807ecc49b namespace=k8s.io
Jan 13 20:41:44.149420 containerd[1493]: time="2025-01-13T20:41:44.149412509Z" level=warning msg="cleaning up after shim disconnected" id=fd1025be08157d50315a79e2246cd2a3da80db505ab69ab3bd8fc71807ecc49b namespace=k8s.io
Jan 13 20:41:44.149420 containerd[1493]: time="2025-01-13T20:41:44.149424792Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:41:44.642766 kubelet[1814]: E0113 20:41:44.642723 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:44.840372 kubelet[1814]: E0113 20:41:44.840328 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:41:44.842655 containerd[1493]: time="2025-01-13T20:41:44.842598553Z" level=info msg="CreateContainer within sandbox \"6a0bd0adb984f3312aff8c79fc6f3df71b48f0b4ec45c80a8c953aaa6d7c1ccb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:41:45.166413 containerd[1493]: time="2025-01-13T20:41:45.166317472Z" level=info msg="CreateContainer within sandbox \"6a0bd0adb984f3312aff8c79fc6f3df71b48f0b4ec45c80a8c953aaa6d7c1ccb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bbac8a46b4e70fedec8c161773dfb51009dc1ff89299cbf2794b0ad4734252c9\""
Jan 13 20:41:45.166962 containerd[1493]: time="2025-01-13T20:41:45.166925954Z" level=info msg="StartContainer for \"bbac8a46b4e70fedec8c161773dfb51009dc1ff89299cbf2794b0ad4734252c9\""
Jan 13 20:41:45.196544 systemd[1]: Started cri-containerd-bbac8a46b4e70fedec8c161773dfb51009dc1ff89299cbf2794b0ad4734252c9.scope - libcontainer container bbac8a46b4e70fedec8c161773dfb51009dc1ff89299cbf2794b0ad4734252c9.
Jan 13 20:41:45.285932 containerd[1493]: time="2025-01-13T20:41:45.285872712Z" level=info msg="StartContainer for \"bbac8a46b4e70fedec8c161773dfb51009dc1ff89299cbf2794b0ad4734252c9\" returns successfully" Jan 13 20:41:45.643822 kubelet[1814]: E0113 20:41:45.643739 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:45.650428 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 13 20:41:45.846266 kubelet[1814]: E0113 20:41:45.846203 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:41:45.864734 kubelet[1814]: I0113 20:41:45.864644 1814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qwjtc" podStartSLOduration=8.864622969 podStartE2EDuration="8.864622969s" podCreationTimestamp="2025-01-13 20:41:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:41:45.863902587 +0000 UTC m=+75.782226026" watchObservedRunningTime="2025-01-13 20:41:45.864622969 +0000 UTC m=+75.782946388" Jan 13 20:41:46.644627 kubelet[1814]: E0113 20:41:46.644549 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:46.847692 kubelet[1814]: E0113 20:41:46.847655 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:41:47.440439 containerd[1493]: time="2025-01-13T20:41:47.440382754Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:47.456848 
containerd[1493]: time="2025-01-13T20:41:47.456784923Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906597" Jan 13 20:41:47.504915 containerd[1493]: time="2025-01-13T20:41:47.504849529Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:47.508265 containerd[1493]: time="2025-01-13T20:41:47.508209404Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 7.803486173s" Jan 13 20:41:47.508265 containerd[1493]: time="2025-01-13T20:41:47.508256873Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 20:41:47.515943 containerd[1493]: time="2025-01-13T20:41:47.515892572Z" level=info msg="CreateContainer within sandbox \"6ef793beaabc39f41a92ab74c28f000a5c073b211af0f825ea7e630019853079\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:41:47.645119 kubelet[1814]: E0113 20:41:47.645034 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:47.849730 kubelet[1814]: E0113 20:41:47.849362 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 
20:41:48.360170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount85911512.mount: Deactivated successfully. Jan 13 20:41:48.585917 containerd[1493]: time="2025-01-13T20:41:48.585861206Z" level=info msg="CreateContainer within sandbox \"6ef793beaabc39f41a92ab74c28f000a5c073b211af0f825ea7e630019853079\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"88cff0819445f044d7e33ca56f71262fe0d3e9186d508e6925b45d6068c80ee8\"" Jan 13 20:41:48.586523 containerd[1493]: time="2025-01-13T20:41:48.586488223Z" level=info msg="StartContainer for \"88cff0819445f044d7e33ca56f71262fe0d3e9186d508e6925b45d6068c80ee8\"" Jan 13 20:41:48.625641 systemd[1]: Started cri-containerd-88cff0819445f044d7e33ca56f71262fe0d3e9186d508e6925b45d6068c80ee8.scope - libcontainer container 88cff0819445f044d7e33ca56f71262fe0d3e9186d508e6925b45d6068c80ee8. Jan 13 20:41:48.647275 kubelet[1814]: E0113 20:41:48.647244 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:48.736189 containerd[1493]: time="2025-01-13T20:41:48.736124551Z" level=info msg="StartContainer for \"88cff0819445f044d7e33ca56f71262fe0d3e9186d508e6925b45d6068c80ee8\" returns successfully" Jan 13 20:41:48.852533 kubelet[1814]: E0113 20:41:48.852503 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:41:48.923456 kubelet[1814]: I0113 20:41:48.923284 1814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-9n7dm" podStartSLOduration=4.117847447 podStartE2EDuration="11.923266277s" podCreationTimestamp="2025-01-13 20:41:37 +0000 UTC" firstStartedPulling="2025-01-13 20:41:39.704323982 +0000 UTC m=+69.622647391" lastFinishedPulling="2025-01-13 20:41:47.509742802 +0000 UTC m=+77.428066221" observedRunningTime="2025-01-13 20:41:48.923046745 
+0000 UTC m=+78.841370154" watchObservedRunningTime="2025-01-13 20:41:48.923266277 +0000 UTC m=+78.841589676" Jan 13 20:41:49.144071 systemd-networkd[1411]: lxc_health: Link UP Jan 13 20:41:49.153334 systemd-networkd[1411]: lxc_health: Gained carrier Jan 13 20:41:49.648272 kubelet[1814]: E0113 20:41:49.648178 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:49.854542 kubelet[1814]: E0113 20:41:49.854496 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:41:50.596171 kubelet[1814]: E0113 20:41:50.596111 1814 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:50.649019 kubelet[1814]: E0113 20:41:50.648937 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:50.801798 kubelet[1814]: E0113 20:41:50.801752 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:41:50.860120 kubelet[1814]: E0113 20:41:50.859963 1814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:41:50.981471 systemd-networkd[1411]: lxc_health: Gained IPv6LL Jan 13 20:41:51.649658 kubelet[1814]: E0113 20:41:51.649591 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:52.650110 kubelet[1814]: E0113 20:41:52.650062 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:53.651161 kubelet[1814]: E0113 20:41:53.651100 1814 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:54.651338 kubelet[1814]: E0113 20:41:54.651265 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:55.651981 kubelet[1814]: E0113 20:41:55.651910 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:56.652824 kubelet[1814]: E0113 20:41:56.652788 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:57.653501 kubelet[1814]: E0113 20:41:57.653439 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:58.654669 kubelet[1814]: E0113 20:41:58.654607 1814 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"