Jan 29 12:02:39.899382 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 29 12:02:39.899409 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:02:39.899424 kernel: BIOS-provided physical RAM map: Jan 29 12:02:39.899432 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 12:02:39.899440 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 12:02:39.899448 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 12:02:39.899458 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 29 12:02:39.899467 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 29 12:02:39.899475 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 29 12:02:39.899489 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 29 12:02:39.899498 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 29 12:02:39.899508 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 12:02:39.899517 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 29 12:02:39.899525 kernel: NX (Execute Disable) protection: active Jan 29 12:02:39.899536 kernel: APIC: Static calls initialized Jan 29 12:02:39.899548 kernel: SMBIOS 2.8 present. 
Jan 29 12:02:39.899558 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 29 12:02:39.899567 kernel: Hypervisor detected: KVM Jan 29 12:02:39.899576 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 12:02:39.899585 kernel: kvm-clock: using sched offset of 2202784832 cycles Jan 29 12:02:39.899595 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 12:02:39.899605 kernel: tsc: Detected 2794.750 MHz processor Jan 29 12:02:39.899614 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 12:02:39.899624 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 12:02:39.899637 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 29 12:02:39.899665 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 12:02:39.899685 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 12:02:39.899704 kernel: Using GB pages for direct mapping Jan 29 12:02:39.899722 kernel: ACPI: Early table checksum verification disabled Jan 29 12:02:39.899738 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 29 12:02:39.899757 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:02:39.899776 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:02:39.899792 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:02:39.899814 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 29 12:02:39.899833 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:02:39.899842 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:02:39.899852 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:02:39.899861 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:02:39.899870 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Jan 29 12:02:39.899880 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Jan 29 12:02:39.899895 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 29 12:02:39.899907 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Jan 29 12:02:39.899917 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Jan 29 12:02:39.899927 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Jan 29 12:02:39.899937 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Jan 29 12:02:39.899946 kernel: No NUMA configuration found Jan 29 12:02:39.899956 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 29 12:02:39.899969 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 29 12:02:39.899998 kernel: Zone ranges: Jan 29 12:02:39.900009 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 12:02:39.900019 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 29 12:02:39.900029 kernel: Normal empty Jan 29 12:02:39.900038 kernel: Movable zone start for each node Jan 29 12:02:39.900048 kernel: Early memory node ranges Jan 29 12:02:39.900058 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 12:02:39.900067 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 29 12:02:39.900077 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 29 12:02:39.900091 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 12:02:39.900101 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 12:02:39.900110 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 29 12:02:39.900120 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 12:02:39.900130 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 12:02:39.900139 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 12:02:39.900159 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 12:02:39.900171 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 12:02:39.900189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 12:02:39.900203 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 12:02:39.900212 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 12:02:39.900222 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 12:02:39.900232 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 12:02:39.900242 kernel: TSC deadline timer available Jan 29 12:02:39.900252 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 29 12:02:39.900262 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 12:02:39.900272 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 29 12:02:39.900281 kernel: kvm-guest: setup PV sched yield Jan 29 12:02:39.900294 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 29 12:02:39.900304 kernel: Booting paravirtualized kernel on KVM Jan 29 12:02:39.900314 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 12:02:39.900324 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 29 12:02:39.900334 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 29 12:02:39.900344 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 29 12:02:39.900354 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 29 12:02:39.900364 kernel: kvm-guest: PV spinlocks enabled Jan 29 12:02:39.900374 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 12:02:39.900388 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:02:39.900399 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 12:02:39.900409 kernel: random: crng init done Jan 29 12:02:39.900419 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 12:02:39.900429 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 12:02:39.900439 kernel: Fallback order for Node 0: 0 Jan 29 12:02:39.900448 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 29 12:02:39.900458 kernel: Policy zone: DMA32 Jan 29 12:02:39.900468 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 12:02:39.900482 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved) Jan 29 12:02:39.900492 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 29 12:02:39.900502 kernel: ftrace: allocating 37921 entries in 149 pages Jan 29 12:02:39.900512 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 12:02:39.900522 kernel: Dynamic Preempt: voluntary Jan 29 12:02:39.900532 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 12:02:39.900543 kernel: rcu: RCU event tracing is enabled. Jan 29 12:02:39.900553 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 29 12:02:39.900563 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 12:02:39.900576 kernel: Rude variant of Tasks RCU enabled. Jan 29 12:02:39.900586 kernel: Tracing variant of Tasks RCU enabled. Jan 29 12:02:39.900596 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 12:02:39.900606 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 29 12:02:39.900616 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 29 12:02:39.900626 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 12:02:39.900635 kernel: Console: colour VGA+ 80x25 Jan 29 12:02:39.900656 kernel: printk: console [ttyS0] enabled Jan 29 12:02:39.900667 kernel: ACPI: Core revision 20230628 Jan 29 12:02:39.900681 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 29 12:02:39.900691 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 12:02:39.900700 kernel: x2apic enabled Jan 29 12:02:39.900710 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 12:02:39.900720 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 29 12:02:39.900730 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 29 12:02:39.900740 kernel: kvm-guest: setup PV IPIs Jan 29 12:02:39.900762 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 12:02:39.900772 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 29 12:02:39.900783 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jan 29 12:02:39.900793 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 29 12:02:39.900807 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 29 12:02:39.900817 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 29 12:02:39.900827 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 12:02:39.900838 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 12:02:39.900849 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 12:02:39.900862 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 12:02:39.900872 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 29 12:02:39.900882 kernel: RETBleed: Mitigation: untrained return thunk Jan 29 12:02:39.900893 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 12:02:39.900903 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 12:02:39.900914 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 29 12:02:39.900925 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 29 12:02:39.900935 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 29 12:02:39.900949 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 12:02:39.900959 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 12:02:39.900970 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 12:02:39.900996 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 12:02:39.901007 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 29 12:02:39.901017 kernel: Freeing SMP alternatives memory: 32K Jan 29 12:02:39.901027 kernel: pid_max: default: 32768 minimum: 301 Jan 29 12:02:39.901038 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 12:02:39.901048 kernel: landlock: Up and running. Jan 29 12:02:39.901062 kernel: SELinux: Initializing. Jan 29 12:02:39.901073 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 12:02:39.901083 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 12:02:39.901094 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 29 12:02:39.901105 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 12:02:39.901115 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 12:02:39.901126 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 12:02:39.901136 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 29 12:02:39.901146 kernel: ... version: 0 Jan 29 12:02:39.901160 kernel: ... bit width: 48 Jan 29 12:02:39.901170 kernel: ... generic registers: 6 Jan 29 12:02:39.901181 kernel: ... value mask: 0000ffffffffffff Jan 29 12:02:39.901191 kernel: ... max period: 00007fffffffffff Jan 29 12:02:39.901201 kernel: ... fixed-purpose events: 0 Jan 29 12:02:39.901211 kernel: ... 
event mask: 000000000000003f Jan 29 12:02:39.901221 kernel: signal: max sigframe size: 1776 Jan 29 12:02:39.901232 kernel: rcu: Hierarchical SRCU implementation. Jan 29 12:02:39.901243 kernel: rcu: Max phase no-delay instances is 400. Jan 29 12:02:39.901256 kernel: smp: Bringing up secondary CPUs ... Jan 29 12:02:39.901267 kernel: smpboot: x86: Booting SMP configuration: Jan 29 12:02:39.901277 kernel: .... node #0, CPUs: #1 #2 #3 Jan 29 12:02:39.901288 kernel: smp: Brought up 1 node, 4 CPUs Jan 29 12:02:39.901298 kernel: smpboot: Max logical packages: 1 Jan 29 12:02:39.901308 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jan 29 12:02:39.901319 kernel: devtmpfs: initialized Jan 29 12:02:39.901329 kernel: x86/mm: Memory block size: 128MB Jan 29 12:02:39.901340 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 12:02:39.901350 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 29 12:02:39.901364 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 12:02:39.901374 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 12:02:39.901384 kernel: audit: initializing netlink subsys (disabled) Jan 29 12:02:39.901394 kernel: audit: type=2000 audit(1738152159.400:1): state=initialized audit_enabled=0 res=1 Jan 29 12:02:39.901405 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 12:02:39.901415 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 12:02:39.901425 kernel: cpuidle: using governor menu Jan 29 12:02:39.901435 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 12:02:39.901446 kernel: dca service started, version 1.12.1 Jan 29 12:02:39.901459 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 29 12:02:39.901470 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 29 12:02:39.901480 kernel: PCI: Using configuration type 1 for base access Jan 29 12:02:39.901491 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 12:02:39.901501 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 12:02:39.901511 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 12:02:39.901521 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 12:02:39.901531 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 12:02:39.901545 kernel: ACPI: Added _OSI(Module Device) Jan 29 12:02:39.901555 kernel: ACPI: Added _OSI(Processor Device) Jan 29 12:02:39.901565 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 12:02:39.901575 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 12:02:39.901585 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 12:02:39.901596 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 12:02:39.901606 kernel: ACPI: Interpreter enabled Jan 29 12:02:39.901616 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 12:02:39.901627 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 12:02:39.901637 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 12:02:39.901660 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 12:02:39.901681 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 29 12:02:39.901714 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 12:02:39.902013 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 12:02:39.902181 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 29 12:02:39.902334 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 29 12:02:39.902349 kernel: PCI host bridge to bus 0000:00 Jan 29 12:02:39.902511 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 12:02:39.902659 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 12:02:39.902804 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 12:02:39.902946 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 29 12:02:39.903171 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 12:02:39.903318 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 29 12:02:39.903462 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 12:02:39.903663 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 29 12:02:39.903844 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 29 12:02:39.904030 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 29 12:02:39.904189 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 29 12:02:39.904342 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 29 12:02:39.904495 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 12:02:39.904670 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 12:02:39.904819 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 29 12:02:39.904967 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 29 12:02:39.905137 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 29 12:02:39.905330 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 29 12:02:39.905486 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 29 12:02:39.905616 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 29 
12:02:39.905753 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 29 12:02:39.905883 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 29 12:02:39.906026 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 29 12:02:39.906150 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 29 12:02:39.906270 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 29 12:02:39.906389 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 29 12:02:39.906517 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 29 12:02:39.906642 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 29 12:02:39.906781 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 29 12:02:39.906901 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 29 12:02:39.907090 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 29 12:02:39.907233 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 29 12:02:39.907354 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 29 12:02:39.907368 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 12:02:39.907377 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 12:02:39.907384 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 12:02:39.907392 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 12:02:39.907400 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 29 12:02:39.907407 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 29 12:02:39.907415 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 29 12:02:39.907422 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 29 12:02:39.907430 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 29 12:02:39.907440 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 29 12:02:39.907447 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 29 12:02:39.907455 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 29 12:02:39.907462 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 29 12:02:39.907470 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 29 12:02:39.907478 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 29 12:02:39.907485 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 29 12:02:39.907493 kernel: iommu: Default domain type: Translated Jan 29 12:02:39.907500 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 12:02:39.907510 kernel: PCI: Using ACPI for IRQ routing Jan 29 12:02:39.907518 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 12:02:39.907526 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 12:02:39.907533 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 29 12:02:39.907662 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 29 12:02:39.907783 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 29 12:02:39.907902 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 12:02:39.907912 kernel: vgaarb: loaded Jan 29 12:02:39.907920 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 29 12:02:39.907931 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 29 12:02:39.907939 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 12:02:39.907946 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 
12:02:39.907954 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 12:02:39.907962 kernel: pnp: PnP ACPI init Jan 29 12:02:39.908122 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 29 12:02:39.908134 kernel: pnp: PnP ACPI: found 6 devices Jan 29 12:02:39.908142 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 12:02:39.908153 kernel: NET: Registered PF_INET protocol family Jan 29 12:02:39.908161 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 12:02:39.908169 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 12:02:39.908176 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 12:02:39.908184 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 12:02:39.908191 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 12:02:39.908199 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 12:02:39.908206 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 12:02:39.908216 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 12:02:39.908224 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 12:02:39.908232 kernel: NET: Registered PF_XDP protocol family Jan 29 12:02:39.908344 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 12:02:39.908454 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 12:02:39.908563 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 12:02:39.908682 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 29 12:02:39.908791 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 29 12:02:39.908900 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 29 12:02:39.908913 kernel: PCI: CLS 0 bytes, default 64 Jan 29 12:02:39.908921 kernel: Initialise system trusted keyrings Jan 29 12:02:39.908929 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 12:02:39.908936 kernel: Key type asymmetric registered Jan 29 12:02:39.908944 kernel: Asymmetric key parser 'x509' registered Jan 29 12:02:39.908952 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 12:02:39.908959 kernel: io scheduler mq-deadline registered Jan 29 12:02:39.908967 kernel: io scheduler kyber registered Jan 29 12:02:39.909038 kernel: io scheduler bfq registered Jan 29 12:02:39.909050 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 12:02:39.909059 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 29 12:02:39.909066 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 29 12:02:39.909074 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 29 12:02:39.909082 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 12:02:39.909089 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 12:02:39.909097 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 12:02:39.909105 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 12:02:39.909113 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 12:02:39.909243 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 29 12:02:39.909254 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 12:02:39.909364 kernel: 
rtc_cmos 00:04: registered as rtc0 Jan 29 12:02:39.909475 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T12:02:39 UTC (1738152159) Jan 29 12:02:39.909591 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 29 12:02:39.909600 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 29 12:02:39.909608 kernel: NET: Registered PF_INET6 protocol family Jan 29 12:02:39.909616 kernel: Segment Routing with IPv6 Jan 29 12:02:39.909627 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 12:02:39.909635 kernel: NET: Registered PF_PACKET protocol family Jan 29 12:02:39.909643 kernel: Key type dns_resolver registered Jan 29 12:02:39.909658 kernel: IPI shorthand broadcast: enabled Jan 29 12:02:39.909666 kernel: sched_clock: Marking stable (616002449, 114679534)->(752182772, -21500789) Jan 29 12:02:39.909674 kernel: registered taskstats version 1 Jan 29 12:02:39.909682 kernel: Loading compiled-in X.509 certificates Jan 29 12:02:39.909689 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 12:02:39.909697 kernel: Key type .fscrypt registered Jan 29 12:02:39.909707 kernel: Key type fscrypt-provisioning registered Jan 29 12:02:39.909714 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 12:02:39.909722 kernel: ima: Allocated hash algorithm: sha1 Jan 29 12:02:39.909730 kernel: ima: No architecture policies found Jan 29 12:02:39.909738 kernel: clk: Disabling unused clocks Jan 29 12:02:39.909745 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 12:02:39.909753 kernel: Write protecting the kernel read-only data: 36864k Jan 29 12:02:39.909761 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 12:02:39.909768 kernel: Run /init as init process Jan 29 12:02:39.909778 kernel: with arguments: Jan 29 12:02:39.909786 kernel: /init Jan 29 12:02:39.909793 kernel: with environment: Jan 29 12:02:39.909801 kernel: HOME=/ Jan 29 12:02:39.909808 kernel: TERM=linux Jan 29 12:02:39.909816 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 12:02:39.909826 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:02:39.909836 systemd[1]: Detected virtualization kvm. Jan 29 12:02:39.909846 systemd[1]: Detected architecture x86-64. Jan 29 12:02:39.909854 systemd[1]: Running in initrd. Jan 29 12:02:39.909862 systemd[1]: No hostname configured, using default hostname. Jan 29 12:02:39.909870 systemd[1]: Hostname set to . Jan 29 12:02:39.909878 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:02:39.909886 systemd[1]: Queued start job for default target initrd.target. Jan 29 12:02:39.909895 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:02:39.909903 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:02:39.909914 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 12:02:39.909934 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 29 12:02:39.909946 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 12:02:39.909955 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 12:02:39.909965 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 12:02:39.909989 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 12:02:39.910001 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:02:39.910010 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:02:39.910018 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:02:39.910026 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:02:39.910034 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:02:39.910042 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:02:39.910050 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:02:39.910062 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:02:39.910070 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:02:39.910079 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:02:39.910087 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:02:39.910095 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:02:39.910104 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:02:39.910112 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:02:39.910120 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 12:02:39.910130 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:02:39.910139 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 12:02:39.910147 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 12:02:39.910155 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:02:39.910163 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:02:39.910171 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:02:39.910180 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 12:02:39.910188 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:02:39.910196 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 12:02:39.910207 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:02:39.910234 systemd-journald[193]: Collecting audit messages is disabled. Jan 29 12:02:39.910257 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:02:39.910265 systemd-journald[193]: Journal started Jan 29 12:02:39.910286 systemd-journald[193]: Runtime Journal (/run/log/journal/53161d50f747434298efb680c8f8c0aa) is 6.0M, max 48.4M, 42.3M free. Jan 29 12:02:39.900222 systemd-modules-load[194]: Inserted module 'overlay' Jan 29 12:02:39.934381 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 29 12:02:39.935049 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:02:39.939109 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 12:02:39.941136 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 29 12:02:39.942143 kernel: Bridge firewalling registered Jan 29 12:02:39.942329 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:02:39.958104 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:02:39.961230 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:02:39.964377 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:02:39.968070 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:02:39.977693 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:02:39.979412 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:02:39.983273 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:02:39.984073 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:02:39.994094 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 12:02:39.996511 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:02:40.007484 dracut-cmdline[228]: dracut-dracut-053 Jan 29 12:02:40.010617 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:02:40.043827 systemd-resolved[230]: Positive Trust Anchors: Jan 29 12:02:40.043846 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:02:40.043890 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:02:40.046959 systemd-resolved[230]: Defaulting to hostname 'linux'. Jan 29 12:02:40.053485 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:02:40.056746 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:02:40.104019 kernel: SCSI subsystem initialized Jan 29 12:02:40.113003 kernel: Loading iSCSI transport class v2.0-870. Jan 29 12:02:40.122998 kernel: iscsi: registered transport (tcp) Jan 29 12:02:40.144205 kernel: iscsi: registered transport (qla4xxx) Jan 29 12:02:40.144253 kernel: QLogic iSCSI HBA Driver Jan 29 12:02:40.191298 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 29 12:02:40.202177 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 12:02:40.227876 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 12:02:40.227967 kernel: device-mapper: uevent: version 1.0.3 Jan 29 12:02:40.227999 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 12:02:40.271008 kernel: raid6: avx2x4 gen() 26310 MB/s Jan 29 12:02:40.288002 kernel: raid6: avx2x2 gen() 26009 MB/s Jan 29 12:02:40.305233 kernel: raid6: avx2x1 gen() 21701 MB/s Jan 29 12:02:40.305258 kernel: raid6: using algorithm avx2x4 gen() 26310 MB/s Jan 29 12:02:40.323268 kernel: raid6: .... xor() 6390 MB/s, rmw enabled Jan 29 12:02:40.323311 kernel: raid6: using avx2x2 recovery algorithm Jan 29 12:02:40.344002 kernel: xor: automatically using best checksumming function avx Jan 29 12:02:40.507013 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 12:02:40.520561 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:02:40.529250 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:02:40.542396 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jan 29 12:02:40.547318 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:02:40.557123 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 12:02:40.570480 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Jan 29 12:02:40.604237 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:02:40.612158 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:02:40.677948 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:02:40.685131 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 12:02:40.696101 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 12:02:40.699795 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:02:40.702342 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:02:40.703580 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:02:40.715017 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 29 12:02:40.742878 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 12:02:40.742895 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 29 12:02:40.743060 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 12:02:40.743072 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 12:02:40.743082 kernel: GPT:9289727 != 19775487 Jan 29 12:02:40.743099 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 12:02:40.743109 kernel: GPT:9289727 != 19775487 Jan 29 12:02:40.743119 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 12:02:40.743128 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:02:40.743139 kernel: libata version 3.00 loaded. Jan 29 12:02:40.743149 kernel: AES CTR mode by8 optimization enabled Jan 29 12:02:40.715165 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 12:02:40.726119 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:02:40.746880 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 29 12:02:40.746969 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:02:40.749232 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:02:40.749563 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:02:40.749611 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:02:40.749930 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:02:40.765504 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:02:40.773502 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 12:02:40.783395 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (473) Jan 29 12:02:40.783411 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 12:02:40.783429 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 12:02:40.783578 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 12:02:40.783730 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (466) Jan 29 12:02:40.783742 kernel: scsi host0: ahci Jan 29 12:02:40.783891 kernel: scsi host1: ahci Jan 29 12:02:40.784095 kernel: scsi host2: ahci Jan 29 12:02:40.784243 kernel: scsi host3: ahci Jan 29 12:02:40.784384 kernel: scsi host4: ahci Jan 29 12:02:40.784525 kernel: scsi host5: ahci Jan 29 12:02:40.784677 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 29 12:02:40.784688 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 29 12:02:40.784698 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 29 12:02:40.784712 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 29 12:02:40.784722 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 29 12:02:40.784735 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 29 12:02:40.782148 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 12:02:40.825733 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:02:40.841720 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 12:02:40.849192 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 12:02:40.853884 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 12:02:40.856423 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 12:02:40.867138 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 12:02:40.868372 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:02:40.878162 disk-uuid[566]: Primary Header is updated. Jan 29 12:02:40.878162 disk-uuid[566]: Secondary Entries is updated. Jan 29 12:02:40.878162 disk-uuid[566]: Secondary Header is updated. Jan 29 12:02:40.882015 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:02:40.886473 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 12:02:40.889578 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:02:41.093786 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 12:02:41.093873 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 12:02:41.093901 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 12:02:41.093912 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 12:02:41.095010 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 12:02:41.096007 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 29 12:02:41.096069 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 29 12:02:41.097212 kernel: ata3.00: applying bridge limits Jan 29 12:02:41.098005 kernel: ata3.00: configured for UDMA/100 Jan 29 12:02:41.098072 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 12:02:41.137546 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 29 12:02:41.149516 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 12:02:41.149528 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 29 12:02:41.889011 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:02:41.889432 disk-uuid[572]: The operation has completed successfully. Jan 29 12:02:41.917688 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 12:02:41.917841 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 12:02:41.945391 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 12:02:41.948931 sh[591]: Success Jan 29 12:02:41.963016 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 29 12:02:41.994410 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 12:02:42.015531 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 12:02:42.018734 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 12:02:42.033095 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 12:02:42.033126 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:02:42.033141 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 12:02:42.034111 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 12:02:42.035457 kernel: BTRFS info (device dm-0): using free space tree Jan 29 12:02:42.040227 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 12:02:42.040667 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 12:02:42.058139 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 12:02:42.059807 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 12:02:42.067565 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:02:42.067639 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:02:42.067655 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:02:42.070991 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:02:42.079560 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 29 12:02:42.081621 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:02:42.090879 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 12:02:42.100162 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 12:02:42.152273 ignition[681]: Ignition 2.19.0 Jan 29 12:02:42.152285 ignition[681]: Stage: fetch-offline Jan 29 12:02:42.152318 ignition[681]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:02:42.152327 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:02:42.152411 ignition[681]: parsed url from cmdline: "" Jan 29 12:02:42.152416 ignition[681]: no config URL provided Jan 29 12:02:42.152421 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:02:42.152431 ignition[681]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:02:42.152456 ignition[681]: op(1): [started] loading QEMU firmware config module Jan 29 12:02:42.152462 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 29 12:02:42.161684 ignition[681]: op(1): [finished] loading QEMU firmware config module Jan 29 12:02:42.161702 ignition[681]: QEMU firmware config was not found. Ignoring... Jan 29 12:02:42.164429 ignition[681]: parsing config with SHA512: 84840651f96b684883e56f0ed54c744b173c5abdd25c12856f6fa2bee64990b1d88dbeeed0a4006fe34c9b1afeec7b91f6401df10e17818f09ac8d6cc361cf55 Jan 29 12:02:42.167040 unknown[681]: fetched base config from "system" Jan 29 12:02:42.167057 unknown[681]: fetched user config from "qemu" Jan 29 12:02:42.167389 ignition[681]: fetch-offline: fetch-offline passed Jan 29 12:02:42.167464 ignition[681]: Ignition finished successfully Jan 29 12:02:42.169943 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:02:42.171757 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:02:42.189214 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:02:42.210288 systemd-networkd[782]: lo: Link UP Jan 29 12:02:42.210300 systemd-networkd[782]: lo: Gained carrier Jan 29 12:02:42.211852 systemd-networkd[782]: Enumeration completed Jan 29 12:02:42.212015 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:02:42.212305 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:02:42.212309 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:02:42.213295 systemd-networkd[782]: eth0: Link UP Jan 29 12:02:42.213299 systemd-networkd[782]: eth0: Gained carrier Jan 29 12:02:42.213310 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:02:42.214298 systemd[1]: Reached target network.target - Network. Jan 29 12:02:42.216239 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 12:02:42.227145 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 29 12:02:42.234047 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 12:02:42.245535 ignition[785]: Ignition 2.19.0 Jan 29 12:02:42.245551 ignition[785]: Stage: kargs Jan 29 12:02:42.245790 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:02:42.245806 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:02:42.250231 ignition[785]: kargs: kargs passed Jan 29 12:02:42.250292 ignition[785]: Ignition finished successfully Jan 29 12:02:42.254984 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 12:02:42.267122 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 12:02:42.282617 ignition[794]: Ignition 2.19.0 Jan 29 12:02:42.282633 ignition[794]: Stage: disks Jan 29 12:02:42.282846 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:02:42.282862 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:02:42.283829 ignition[794]: disks: disks passed Jan 29 12:02:42.286399 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 12:02:42.283886 ignition[794]: Ignition finished successfully Jan 29 12:02:42.288297 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 12:02:42.290218 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:02:42.292904 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:02:42.294118 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:02:42.295341 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:02:42.308163 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 12:02:42.322777 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 12:02:42.422444 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 12:02:42.435077 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 12:02:42.528020 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 12:02:42.528796 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 12:02:42.530262 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 12:02:42.545055 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:02:42.546963 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 12:02:42.547683 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 12:02:42.547719 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 12:02:42.559299 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811) Jan 29 12:02:42.559319 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:02:42.559331 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:02:42.559341 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:02:42.547739 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:02:42.554357 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 29 12:02:42.560118 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 12:02:42.564481 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:02:42.566796 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:02:42.595987 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 12:02:42.599746 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Jan 29 12:02:42.603278 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 12:02:42.607887 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 12:02:42.684113 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 12:02:42.696066 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 12:02:42.699315 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 12:02:42.703998 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:02:42.727359 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 12:02:42.738489 ignition[927]: INFO : Ignition 2.19.0 Jan 29 12:02:42.738489 ignition[927]: INFO : Stage: mount Jan 29 12:02:42.740217 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:02:42.740217 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:02:42.740217 ignition[927]: INFO : mount: mount passed Jan 29 12:02:42.740217 ignition[927]: INFO : Ignition finished successfully Jan 29 12:02:42.745824 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 12:02:42.757049 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 12:02:43.033023 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 12:02:43.046174 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:02:43.053439 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938) Jan 29 12:02:43.053470 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:02:43.053487 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:02:43.054996 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:02:43.058002 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:02:43.059163 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 12:02:43.083536 ignition[955]: INFO : Ignition 2.19.0 Jan 29 12:02:43.083536 ignition[955]: INFO : Stage: files Jan 29 12:02:43.085285 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:02:43.085285 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:02:43.087756 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Jan 29 12:02:43.089357 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 12:02:43.089357 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 12:02:43.093300 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 12:02:43.094732 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 12:02:43.096298 unknown[955]: wrote ssh authorized keys file for user: core Jan 29 12:02:43.097426 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 12:02:43.099440 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 12:02:43.101145 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 12:02:43.102795 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 12:02:43.104531 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 12:02:43.106359 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:02:43.108180 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:02:43.109935 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:02:43.112446 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:02:43.114858 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:02:43.116947 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 12:02:43.679339 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Jan 29 12:02:43.896087 systemd-networkd[782]: eth0: Gained IPv6LL Jan 29 12:02:44.059860 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:02:44.059860 ignition[955]: INFO : files: op(8): [started] processing unit "containerd.service" Jan 29 12:02:44.086935 ignition[955]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 12:02:44.086935 ignition[955]: INFO : files: op(8): 
op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 12:02:44.086935 ignition[955]: INFO : files: op(8): [finished] processing unit "containerd.service" Jan 29 12:02:44.086935 ignition[955]: INFO : files: op(a): [started] processing unit "coreos-metadata.service" Jan 29 12:02:44.086935 ignition[955]: INFO : files: op(a): op(b): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 12:02:44.086935 ignition[955]: INFO : files: op(a): op(b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 12:02:44.086935 ignition[955]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service" Jan 29 12:02:44.086935 ignition[955]: INFO : files: op(c): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 12:02:44.111649 ignition[955]: INFO : files: op(c): op(d): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 12:02:44.116406 ignition[955]: INFO : files: op(c): op(d): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 12:02:44.117997 ignition[955]: INFO : files: op(c): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 12:02:44.117997 ignition[955]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:02:44.117997 ignition[955]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:02:44.117997 ignition[955]: INFO : files: files passed Jan 29 12:02:44.117997 ignition[955]: INFO : Ignition finished successfully Jan 29 12:02:44.119487 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 12:02:44.127102 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 12:02:44.128826 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 12:02:44.130874 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 12:02:44.130989 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 12:02:44.138155 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 12:02:44.140857 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:02:44.140857 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:02:44.149265 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:02:44.152283 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:02:44.154908 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 12:02:44.166124 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 12:02:44.187918 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 12:02:44.189483 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 12:02:44.192306 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 12:02:44.194542 systemd[1]: Reached target initrd.target - Initrd Default Target. 
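The files stage above is driven by an Ignition config delivered by the platform (QEMU here). As a rough sketch of the shape such a config could take for the operations logged, here is an Ignition v3-style document expressed as a Python dict; the spec version, file contents, and the SSH key are placeholders inferred from the log, not the machine's actual config:

```python
import json

# Approximate Ignition v3 config corresponding to the files stage above;
# all contents below are illustrative placeholders.
config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        "users": [{"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]}]
    },
    "storage": {
        "files": [
            {"path": "/etc/flatcar-cgroupv1", "contents": {"source": "data:,"}},
            {"path": "/etc/flatcar/update.conf",
             "contents": {"source": "data:,REBOOT_STRATEGY=off%0A"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "containerd.service",
             "dropins": [{"name": "10-use-cgroupfs.conf", "contents": "[Service]\n..."}]},
            {"name": "coreos-metadata.service", "enabled": False},
        ]
    },
}

print(json.dumps(config, indent=2))
```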
Jan 29 12:02:44.196817 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 12:02:44.199194 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 12:02:44.214961 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:02:44.227117 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 12:02:44.272964 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:02:44.303455 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:02:44.303802 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 12:02:44.304306 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 12:02:44.304420 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:02:44.310991 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 12:02:44.311501 systemd[1]: Stopped target basic.target - Basic System. Jan 29 12:02:44.311887 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 12:02:44.315934 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:02:44.317951 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 12:02:44.320402 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 12:02:44.322338 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:02:44.324261 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 12:02:44.326582 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 12:02:44.328457 systemd[1]: Stopped target swap.target - Swaps. Jan 29 12:02:44.330286 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 12:02:44.330395 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:02:44.333336 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:02:44.333883 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:02:44.334353 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 12:02:44.334508 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:02:44.338774 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 12:02:44.338882 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 12:02:44.342793 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 12:02:44.342901 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:02:44.344865 systemd[1]: Stopped target paths.target - Path Units. Jan 29 12:02:44.346615 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 12:02:44.348015 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:02:44.349309 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 12:02:44.351264 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 12:02:44.352999 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 12:02:44.353087 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:02:44.355273 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jan 29 12:02:44.355389 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:02:44.357023 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 12:02:44.357150 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:02:44.358795 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 12:02:44.358916 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 12:02:44.371121 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 12:02:44.372057 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 12:02:44.373428 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 12:02:44.373655 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:02:44.375508 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 12:02:44.375653 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:02:44.383436 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 12:02:44.383595 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 12:02:44.396458 ignition[1009]: INFO : Ignition 2.19.0 Jan 29 12:02:44.396458 ignition[1009]: INFO : Stage: umount Jan 29 12:02:44.423005 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:02:44.423005 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:02:44.425680 ignition[1009]: INFO : umount: umount passed Jan 29 12:02:44.427013 ignition[1009]: INFO : Ignition finished successfully Jan 29 12:02:44.425857 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 12:02:44.430775 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 12:02:44.430930 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 12:02:44.431695 systemd[1]: Stopped target network.target - Network. Jan 29 12:02:44.434008 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 12:02:44.434070 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 12:02:44.434558 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 12:02:44.434609 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 12:02:44.434908 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 12:02:44.434955 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 12:02:44.439290 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 12:02:44.439342 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 12:02:44.439759 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 12:02:44.440052 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 12:02:44.447354 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 12:02:44.447502 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 12:02:44.450237 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 12:02:44.450301 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:02:44.456493 systemd-networkd[782]: eth0: DHCPv6 lease lost Jan 29 12:02:44.481161 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 12:02:44.481336 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jan 29 12:02:44.481967 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 12:02:44.482034 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:02:44.489224 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 12:02:44.491157 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 12:02:44.492298 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:02:44.495094 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:02:44.496218 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:02:44.498742 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 12:02:44.500054 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 12:02:44.502740 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:02:44.514711 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 12:02:44.514919 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:02:44.518618 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 12:02:44.518666 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 12:02:44.521469 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 12:02:44.521511 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:02:44.523355 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 12:02:44.523411 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:02:44.527201 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 12:02:44.527263 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 12:02:44.527951 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:02:44.528016 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:02:44.535119 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 12:02:44.535631 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 12:02:44.535680 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:02:44.538828 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 12:02:44.538876 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:02:44.541018 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 12:02:44.541065 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:02:44.554428 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:02:44.554476 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:02:44.556931 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 12:02:44.557051 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 12:02:44.584245 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 12:02:44.584369 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 12:02:44.700342 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jan 29 12:02:44.700477 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 12:02:44.702554 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 12:02:44.704263 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 12:02:44.704317 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 12:02:44.717132 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 12:02:44.723142 systemd[1]: Switching root. Jan 29 12:02:44.758244 systemd-journald[193]: Journal stopped Jan 29 12:02:45.877653 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Jan 29 12:02:45.877714 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 12:02:45.877731 kernel: SELinux: policy capability open_perms=1 Jan 29 12:02:45.877757 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 12:02:45.877768 kernel: SELinux: policy capability always_check_network=0 Jan 29 12:02:45.877779 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 12:02:45.877792 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 12:02:45.877809 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 12:02:45.877821 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 12:02:45.877831 kernel: audit: type=1403 audit(1738152165.178:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 12:02:45.877844 systemd[1]: Successfully loaded SELinux policy in 55.619ms. Jan 29 12:02:45.877865 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.704ms. Jan 29 12:02:45.877882 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:02:45.877896 systemd[1]: Detected virtualization kvm. Jan 29 12:02:45.877908 systemd[1]: Detected architecture x86-64. Jan 29 12:02:45.877920 systemd[1]: Detected first boot. Jan 29 12:02:45.877932 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:02:45.877944 zram_generator::config[1071]: No configuration found. Jan 29 12:02:45.877961 systemd[1]: Populated /etc with preset unit settings. Jan 29 12:02:45.878016 systemd[1]: Queued start job for default target multi-user.target. Jan 29 12:02:45.878032 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 12:02:45.878045 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 12:02:45.878058 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 12:02:45.878070 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 12:02:45.878082 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 12:02:45.878094 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 12:02:45.878106 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 12:02:45.878120 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 12:02:45.878131 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 12:02:45.878146 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
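The systemd 255 banner above lists compile-time features as +NAME / -NAME tokens. A small sketch for splitting that string when triaging such a log; note that -BPF_FRAMEWORK is what later produces the warning about journald's IP firewalling not being supported:

```python
# Feature string copied from the systemd 255 banner above.
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
            "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
            "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT")

enabled = {t[1:] for t in features.split() if t.startswith("+")}
disabled = {t[1:] for t in features.split() if t.startswith("-")}

# A build without BPF_FRAMEWORK cannot do BPF/cgroup firewalling, which is
# what the later systemd-journald.service warning refers to.
print("BPF_FRAMEWORK enabled?", "BPF_FRAMEWORK" in enabled)
print(f"{len(enabled)} features enabled, {len(disabled)} disabled")
```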
Jan 29 12:02:45.878158 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:02:45.878170 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 12:02:45.878182 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 12:02:45.878194 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 12:02:45.878206 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:02:45.878218 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 12:02:45.878230 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:02:45.878241 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 12:02:45.878255 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:02:45.878282 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:02:45.878305 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:02:45.878327 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:02:45.878352 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 12:02:45.878376 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 12:02:45.878400 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:02:45.878425 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:02:45.878452 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:02:45.878477 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:02:45.878502 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:02:45.878529 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 12:02:45.878541 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 12:02:45.878553 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 12:02:45.878564 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 12:02:45.878576 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:02:45.878588 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 12:02:45.878603 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 12:02:45.878614 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 12:02:45.878626 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 12:02:45.878638 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:02:45.878649 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:02:45.878661 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 12:02:45.878674 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:02:45.878687 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:02:45.878699 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 29 12:02:45.878714 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 12:02:45.878726 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:02:45.878738 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 12:02:45.878751 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 29 12:02:45.878766 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 29 12:02:45.878779 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:02:45.878790 kernel: loop: module loaded Jan 29 12:02:45.878802 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:02:45.878815 kernel: fuse: init (API version 7.39) Jan 29 12:02:45.878827 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 12:02:45.878839 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 12:02:45.878868 systemd-journald[1155]: Collecting audit messages is disabled. Jan 29 12:02:45.878889 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:02:45.878901 systemd-journald[1155]: Journal started Jan 29 12:02:45.878925 systemd-journald[1155]: Runtime Journal (/run/log/journal/53161d50f747434298efb680c8f8c0aa) is 6.0M, max 48.4M, 42.3M free. Jan 29 12:02:45.880740 kernel: ACPI: bus type drm_connector registered Jan 29 12:02:45.883004 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:02:45.888039 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:02:45.892437 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 12:02:45.893757 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 12:02:45.894999 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 12:02:45.896109 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 12:02:45.897324 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 12:02:45.898561 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 12:02:45.899955 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:02:45.901621 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 12:02:45.901840 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 12:02:45.903364 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:02:45.903592 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:02:45.905233 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:02:45.905445 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:02:45.906881 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:02:45.907102 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:02:45.908677 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 12:02:45.908888 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jan 29 12:02:45.910328 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:02:45.910557 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:02:45.912097 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:02:45.913614 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 12:02:45.915275 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 12:02:45.929088 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 12:02:45.938069 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 12:02:45.940293 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 12:02:45.941453 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 12:02:45.967102 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 12:02:45.969338 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 12:02:45.970555 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:02:45.973108 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 12:02:45.974423 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:02:45.977144 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:02:45.978358 systemd-journald[1155]: Time spent on flushing to /var/log/journal/53161d50f747434298efb680c8f8c0aa is 16.023ms for 924 entries. Jan 29 12:02:45.978358 systemd-journald[1155]: System Journal (/var/log/journal/53161d50f747434298efb680c8f8c0aa) is 8.0M, max 195.6M, 187.6M free. Jan 29 12:02:46.076024 systemd-journald[1155]: Received client request to flush runtime journal. Jan 29 12:02:45.982821 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:02:45.989915 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 12:02:45.991493 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:02:45.993103 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 12:02:45.994364 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 12:02:46.000009 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 12:02:46.019492 udevadm[1214]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 12:02:46.023263 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:02:46.026566 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Jan 29 12:02:46.026580 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Jan 29 12:02:46.032366 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:02:46.044107 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 12:02:46.059323 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
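The journald statistics above (16.023 ms spent flushing 924 entries to /var/log/journal) reduce to a per-entry cost; trivial arithmetic, included only to make the numbers concrete:

```python
# Figures from the systemd-journald flush message above.
flush_ms, entries = 16.023, 924

per_entry_us = flush_ms * 1000 / entries
rate = entries / (flush_ms / 1000)

print(f"{per_entry_us:.1f} µs per entry, ~{rate:,.0f} entries/s during the flush")
```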
Jan 29 12:02:46.063214 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 12:02:46.068565 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 12:02:46.082153 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:02:46.084686 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 12:02:46.097851 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Jan 29 12:02:46.097874 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Jan 29 12:02:46.103558 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:02:46.648851 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 12:02:46.655108 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:02:46.690936 systemd-udevd[1235]: Using default interface naming scheme 'v255'. Jan 29 12:02:46.710360 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:02:46.722173 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:02:46.734111 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 12:02:46.748001 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1244) Jan 29 12:02:46.747836 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 29 12:02:46.807047 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 12:02:46.807417 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 12:02:46.815999 kernel: ACPI: button: Power Button [PWRF] Jan 29 12:02:46.821801 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 12:02:46.838713 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 12:02:46.839232 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 12:02:46.839405 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 12:02:46.866035 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 29 12:02:46.877481 systemd-networkd[1240]: lo: Link UP Jan 29 12:02:46.877832 systemd-networkd[1240]: lo: Gained carrier Jan 29 12:02:46.880609 systemd-networkd[1240]: Enumeration completed Jan 29 12:02:46.880723 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:02:46.883771 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:02:46.883779 systemd-networkd[1240]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:02:46.885795 systemd-networkd[1240]: eth0: Link UP Jan 29 12:02:46.885862 systemd-networkd[1240]: eth0: Gained carrier Jan 29 12:02:46.885908 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:02:46.934468 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 12:02:46.933338 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 29 12:02:46.942034 systemd-networkd[1240]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 12:02:46.942891 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:02:46.948342 kernel: kvm_amd: TSC scaling supported Jan 29 12:02:46.948374 kernel: kvm_amd: Nested Virtualization enabled Jan 29 12:02:46.948386 kernel: kvm_amd: Nested Paging enabled Jan 29 12:02:46.950517 kernel: kvm_amd: LBR virtualization supported Jan 29 12:02:46.950577 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 29 12:02:46.950590 kernel: kvm_amd: Virtual GIF supported Jan 29 12:02:46.972059 kernel: EDAC MC: Ver: 3.0.0 Jan 29 12:02:47.008526 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 12:02:47.017305 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 12:02:47.061394 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:02:47.070358 lvm[1280]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:02:47.103785 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 12:02:47.105829 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:02:47.124160 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 12:02:47.129021 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:02:47.166681 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 12:02:47.168596 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:02:47.170089 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 12:02:47.170125 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:02:47.171373 systemd[1]: Reached target machines.target - Containers. Jan 29 12:02:47.174280 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 12:02:47.189166 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 12:02:47.192332 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 12:02:47.193771 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:02:47.195138 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 12:02:47.198153 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 12:02:47.202850 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 12:02:47.204193 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 12:02:47.212273 kernel: loop0: detected capacity change from 0 to 142488 Jan 29 12:02:47.223908 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 12:02:47.230341 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 12:02:47.231157 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
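The DHCPv4 lease at the top of this block (10.0.0.142/16, gateway 10.0.0.1) can be sanity-checked with Python's standard library; a small sketch using the values from the log:

```python
import ipaddress

# Values from the systemd-networkd DHCPv4 message above.
iface = ipaddress.ip_interface("10.0.0.142/16")
gateway = ipaddress.ip_address("10.0.0.1")

print("network:", iface.network)                      # 10.0.0.0/16
print("gateway on-link:", gateway in iface.network)   # True
print("usable hosts:", iface.network.num_addresses - 2)
```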
Jan 29 12:02:47.242992 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 12:02:47.267997 kernel: loop1: detected capacity change from 0 to 210664 Jan 29 12:02:47.297008 kernel: loop2: detected capacity change from 0 to 140768 Jan 29 12:02:47.333001 kernel: loop3: detected capacity change from 0 to 142488 Jan 29 12:02:47.344014 kernel: loop4: detected capacity change from 0 to 210664 Jan 29 12:02:47.353004 kernel: loop5: detected capacity change from 0 to 140768 Jan 29 12:02:47.361089 (sd-merge)[1307]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 12:02:47.361707 (sd-merge)[1307]: Merged extensions into '/usr'. Jan 29 12:02:47.365717 systemd[1]: Reloading requested from client PID 1293 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 12:02:47.365734 systemd[1]: Reloading... Jan 29 12:02:47.423060 zram_generator::config[1341]: No configuration found. Jan 29 12:02:47.468271 ldconfig[1290]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 12:02:47.555307 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:02:47.620572 systemd[1]: Reloading finished in 254 ms. Jan 29 12:02:47.641031 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 12:02:47.642621 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 12:02:47.657106 systemd[1]: Starting ensure-sysext.service... Jan 29 12:02:47.659126 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:02:47.663321 systemd[1]: Reloading requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)... Jan 29 12:02:47.663335 systemd[1]: Reloading... Jan 29 12:02:47.683600 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 12:02:47.684099 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 12:02:47.685095 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 12:02:47.685390 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Jan 29 12:02:47.685481 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Jan 29 12:02:47.692875 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:02:47.692886 systemd-tmpfiles[1380]: Skipping /boot Jan 29 12:02:47.710084 zram_generator::config[1408]: No configuration found. Jan 29 12:02:47.710967 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:02:47.711069 systemd-tmpfiles[1380]: Skipping /boot Jan 29 12:02:47.829853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:02:47.895169 systemd[1]: Reloading finished in 231 ms. Jan 29 12:02:47.912689 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:02:47.926557 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:02:47.929166 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
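The sd-merge lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr (the kubernetes.raw symlink was created by the Ignition files stage earlier). A rough sketch that lists the .raw images sysext would pick up, assuming its commonly documented search directories:

```python
from pathlib import Path

# Assumption: default systemd-sysext search locations, no custom overrides.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for directory in map(Path, SEARCH_DIRS):
    if not directory.is_dir():
        continue
    for image in sorted(directory.glob("*.raw")):
        # On this host, /etc/extensions/kubernetes.raw is a symlink to
        # /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw.
        print(f"{image} -> {image.resolve()}")
```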
Jan 29 12:02:47.932304 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 12:02:47.937116 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:02:47.942127 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 12:02:47.955950 systemd[1]: Finished ensure-sysext.service. Jan 29 12:02:47.957391 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 12:02:47.962366 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:02:47.962682 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:02:47.971314 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:02:47.974571 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:02:47.975455 augenrules[1479]: No rules Jan 29 12:02:47.978142 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:02:47.986104 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:02:47.987368 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:02:47.990962 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 12:02:47.992204 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:02:47.992933 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:02:47.994786 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 12:02:47.996612 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:02:47.996879 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:02:47.998733 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:02:47.998944 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:02:48.000394 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:02:48.000685 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:02:48.002699 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:02:48.003279 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:02:48.011399 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:02:48.011683 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:02:48.017164 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 12:02:48.020337 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 12:02:48.022836 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 12:02:48.031029 systemd-resolved[1458]: Positive Trust Anchors: Jan 29 12:02:48.031042 systemd-resolved[1458]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:02:48.031074 systemd-resolved[1458]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:02:48.034501 systemd-resolved[1458]: Defaulting to hostname 'linux'. Jan 29 12:02:48.036799 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 12:02:48.038176 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:02:48.039566 systemd[1]: Reached target network.target - Network. Jan 29 12:02:48.040497 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:02:48.079650 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 12:02:48.081255 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:02:48.610024 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 12:02:48.610043 systemd-resolved[1458]: Clock change detected. Flushing caches. Jan 29 12:02:48.611269 systemd-timesyncd[1490]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 12:02:48.611320 systemd-timesyncd[1490]: Initial clock synchronization to Wed 2025-01-29 12:02:48.609986 UTC. Jan 29 12:02:48.611354 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 12:02:48.612635 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 12:02:48.613957 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 12:02:48.613989 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:02:48.614930 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 12:02:48.616193 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 12:02:48.617531 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 12:02:48.618826 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:02:48.620472 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 12:02:48.623747 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 12:02:48.626256 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 12:02:48.629742 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 12:02:48.630877 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:02:48.631866 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:02:48.633006 systemd[1]: System is tainted: cgroupsv1 Jan 29 12:02:48.633045 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:02:48.633067 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:02:48.634409 systemd[1]: Starting containerd.service - containerd container runtime... 
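containerd is starting here; further down, its plugin loader skips the aufs, btrfs, devmapper and zfs snapshotters and keeps overlayfs, because /var/lib/containerd sits on the ext4 root. A rough, Linux-only sketch that reproduces that filesystem check by reading /proc/self/mounts:

```python
from pathlib import Path

# Rough check: which mounted filesystem backs /var/lib/containerd?
target = str(Path("/var/lib/containerd").resolve())

best_mount, best_fstype = "", "unknown"
for entry in Path("/proc/self/mounts").read_text().splitlines():
    _device, mountpoint, fstype, *_ = entry.split()
    prefix = mountpoint.rstrip("/") + "/"
    if (target == mountpoint or target.startswith(prefix)) and len(mountpoint) > len(best_mount):
        best_mount, best_fstype = mountpoint, fstype

# On this machine this should report the root mount, e.g. '/' (ext4).
print(f"{target} is on {best_mount!r} ({best_fstype})")
```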
Jan 29 12:02:48.636679 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 12:02:48.638761 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 12:02:48.644585 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 12:02:48.645833 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 12:02:48.648665 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 12:02:48.652423 jq[1511]: false Jan 29 12:02:48.653541 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 12:02:48.657996 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 12:02:48.663909 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 12:02:48.666224 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 12:02:48.667925 dbus-daemon[1510]: [system] SELinux support is enabled Jan 29 12:02:48.670077 extend-filesystems[1513]: Found loop3 Jan 29 12:02:48.674343 extend-filesystems[1513]: Found loop4 Jan 29 12:02:48.674343 extend-filesystems[1513]: Found loop5 Jan 29 12:02:48.674343 extend-filesystems[1513]: Found sr0 Jan 29 12:02:48.674343 extend-filesystems[1513]: Found vda Jan 29 12:02:48.674343 extend-filesystems[1513]: Found vda1 Jan 29 12:02:48.674343 extend-filesystems[1513]: Found vda2 Jan 29 12:02:48.674343 extend-filesystems[1513]: Found vda3 Jan 29 12:02:48.674343 extend-filesystems[1513]: Found usr Jan 29 12:02:48.674343 extend-filesystems[1513]: Found vda4 Jan 29 12:02:48.674343 extend-filesystems[1513]: Found vda6 Jan 29 12:02:48.674343 extend-filesystems[1513]: Found vda7 Jan 29 12:02:48.674343 extend-filesystems[1513]: Found vda9 Jan 29 12:02:48.674343 extend-filesystems[1513]: Checking size of /dev/vda9 Jan 29 12:02:48.671595 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 12:02:48.676547 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 12:02:48.678534 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:02:48.697741 jq[1532]: true Jan 29 12:02:48.684846 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:02:48.708628 update_engine[1527]: I20250129 12:02:48.703706 1527 main.cc:92] Flatcar Update Engine starting Jan 29 12:02:48.708628 update_engine[1527]: I20250129 12:02:48.704944 1527 update_check_scheduler.cc:74] Next update check in 2m36s Jan 29 12:02:48.708931 extend-filesystems[1513]: Resized partition /dev/vda9 Jan 29 12:02:48.713451 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1245) Jan 29 12:02:48.685188 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 12:02:48.713560 extend-filesystems[1539]: resize2fs 1.47.1 (20-May-2024) Jan 29 12:02:48.685549 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 12:02:48.721185 jq[1538]: true Jan 29 12:02:48.685834 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:02:48.690650 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:02:48.691042 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
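extend-filesystems has just grown the vda9 partition; the kernel messages that follow report the ext4 filesystem being resized from 553472 to 1864699 blocks of 4 KiB. Converted to familiar units:

```python
# Block counts from the EXT4 resize messages that follow; 4 KiB blocks.
block = 4096
before, after = 553472, 1864699

to_gib = lambda blocks: blocks * block / 2**30
print(f"{to_gib(before):.2f} GiB -> {to_gib(after):.2f} GiB "
      f"(+{to_gib(after - before):.2f} GiB)")
```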
Jan 29 12:02:48.724742 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 12:02:48.734866 (ntainerd)[1542]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:02:48.751710 systemd-logind[1523]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 12:02:48.751736 systemd-logind[1523]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 12:02:48.751750 systemd[1]: Started update-engine.service - Update Engine. Jan 29 12:02:48.752396 systemd-logind[1523]: New seat seat0. Jan 29 12:02:48.753789 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 12:02:48.753820 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 12:02:48.755461 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 12:02:48.755508 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:02:48.757858 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 12:02:48.763824 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 12:02:48.765335 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:02:48.777421 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 12:02:48.796841 locksmithd[1559]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:02:48.808138 extend-filesystems[1539]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 12:02:48.808138 extend-filesystems[1539]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 12:02:48.808138 extend-filesystems[1539]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 12:02:48.812247 extend-filesystems[1513]: Resized filesystem in /dev/vda9 Jan 29 12:02:48.814117 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:02:48.814509 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:02:48.816199 bash[1566]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:02:48.817282 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:02:48.820216 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 12:02:48.931833 containerd[1542]: time="2025-01-29T12:02:48.931666600Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 12:02:48.961294 containerd[1542]: time="2025-01-29T12:02:48.961209643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:02:48.963540 containerd[1542]: time="2025-01-29T12:02:48.963483186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:02:48.963540 containerd[1542]: time="2025-01-29T12:02:48.963521207Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:02:48.963540 containerd[1542]: time="2025-01-29T12:02:48.963540223Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:02:48.963818 containerd[1542]: time="2025-01-29T12:02:48.963780023Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 12:02:48.963818 containerd[1542]: time="2025-01-29T12:02:48.963809909Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:02:48.963943 containerd[1542]: time="2025-01-29T12:02:48.963912712Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:02:48.963943 containerd[1542]: time="2025-01-29T12:02:48.963935805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:02:48.964322 containerd[1542]: time="2025-01-29T12:02:48.964280922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:02:48.964322 containerd[1542]: time="2025-01-29T12:02:48.964307842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:02:48.964390 containerd[1542]: time="2025-01-29T12:02:48.964333050Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:02:48.964390 containerd[1542]: time="2025-01-29T12:02:48.964348449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:02:48.964513 containerd[1542]: time="2025-01-29T12:02:48.964487099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:02:48.964846 containerd[1542]: time="2025-01-29T12:02:48.964806547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:02:48.965083 containerd[1542]: time="2025-01-29T12:02:48.965039785Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:02:48.965083 containerd[1542]: time="2025-01-29T12:02:48.965067797Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:02:48.965239 containerd[1542]: time="2025-01-29T12:02:48.965203772Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 29 12:02:48.965307 containerd[1542]: time="2025-01-29T12:02:48.965283522Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:02:48.970820 containerd[1542]: time="2025-01-29T12:02:48.970776661Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:02:48.970820 containerd[1542]: time="2025-01-29T12:02:48.970821365Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:02:48.970820 containerd[1542]: time="2025-01-29T12:02:48.970835852Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:02:48.971045 containerd[1542]: time="2025-01-29T12:02:48.970850991Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 12:02:48.971045 containerd[1542]: time="2025-01-29T12:02:48.970865057Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:02:48.971045 containerd[1542]: time="2025-01-29T12:02:48.971004108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 12:02:48.971450 containerd[1542]: time="2025-01-29T12:02:48.971389931Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 12:02:48.971688 containerd[1542]: time="2025-01-29T12:02:48.971655710Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:02:48.971688 containerd[1542]: time="2025-01-29T12:02:48.971680045Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:02:48.971746 containerd[1542]: time="2025-01-29T12:02:48.971693721Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:02:48.971746 containerd[1542]: time="2025-01-29T12:02:48.971709160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:02:48.971746 containerd[1542]: time="2025-01-29T12:02:48.971722044Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:02:48.971746 containerd[1542]: time="2025-01-29T12:02:48.971734558Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:02:48.971858 containerd[1542]: time="2025-01-29T12:02:48.971751299Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 12:02:48.971858 containerd[1542]: time="2025-01-29T12:02:48.971767329Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:02:48.971858 containerd[1542]: time="2025-01-29T12:02:48.971780193Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:02:48.971858 containerd[1542]: time="2025-01-29T12:02:48.971792376Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:02:48.971858 containerd[1542]: time="2025-01-29T12:02:48.971804268Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 29 12:02:48.971858 containerd[1542]: time="2025-01-29T12:02:48.971826189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:02:48.971858 containerd[1542]: time="2025-01-29T12:02:48.971839925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:02:48.971858 containerd[1542]: time="2025-01-29T12:02:48.971851978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:02:48.971858 containerd[1542]: time="2025-01-29T12:02:48.971864942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:02:48.972108 containerd[1542]: time="2025-01-29T12:02:48.971889207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:02:48.972108 containerd[1542]: time="2025-01-29T12:02:48.971904867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:02:48.972108 containerd[1542]: time="2025-01-29T12:02:48.971921157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 12:02:48.972108 containerd[1542]: time="2025-01-29T12:02:48.971934382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 12:02:48.972108 containerd[1542]: time="2025-01-29T12:02:48.971948208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 12:02:48.972108 containerd[1542]: time="2025-01-29T12:02:48.971962595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:02:48.972108 containerd[1542]: time="2025-01-29T12:02:48.971975710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:02:48.972108 containerd[1542]: time="2025-01-29T12:02:48.971987973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:02:48.972108 containerd[1542]: time="2025-01-29T12:02:48.972000967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:02:48.972108 containerd[1542]: time="2025-01-29T12:02:48.972016186Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:02:48.972108 containerd[1542]: time="2025-01-29T12:02:48.972037365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:02:48.972108 containerd[1542]: time="2025-01-29T12:02:48.972052714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:02:48.972108 containerd[1542]: time="2025-01-29T12:02:48.972063584Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:02:48.972108 containerd[1542]: time="2025-01-29T12:02:48.972118027Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:02:48.972476 containerd[1542]: time="2025-01-29T12:02:48.972136261Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:02:48.972476 containerd[1542]: time="2025-01-29T12:02:48.972147432Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 12:02:48.972476 containerd[1542]: time="2025-01-29T12:02:48.972159124Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:02:48.972476 containerd[1542]: time="2025-01-29T12:02:48.972169794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:02:48.972476 containerd[1542]: time="2025-01-29T12:02:48.972183940Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:02:48.972476 containerd[1542]: time="2025-01-29T12:02:48.972194259Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:02:48.972476 containerd[1542]: time="2025-01-29T12:02:48.972205480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 12:02:48.972662 containerd[1542]: time="2025-01-29T12:02:48.972517766Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:02:48.972662 containerd[1542]: time="2025-01-29T12:02:48.972581906Z" level=info msg="Connect containerd service" Jan 29 12:02:48.972662 containerd[1542]: time="2025-01-29T12:02:48.972622713Z" level=info msg="using legacy CRI server" Jan 29 12:02:48.972662 containerd[1542]: time="2025-01-29T12:02:48.972630407Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:02:48.973882 containerd[1542]: time="2025-01-29T12:02:48.973187822Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:02:48.974715 containerd[1542]: time="2025-01-29T12:02:48.974688166Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:02:48.974863 containerd[1542]: time="2025-01-29T12:02:48.974814713Z" level=info msg="Start subscribing containerd event" Jan 29 12:02:48.974918 containerd[1542]: time="2025-01-29T12:02:48.974900333Z" level=info msg="Start recovering state" Jan 29 12:02:48.975127 containerd[1542]: time="2025-01-29T12:02:48.975105889Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:02:48.975219 containerd[1542]: time="2025-01-29T12:02:48.975118132Z" level=info msg="Start event monitor" Jan 29 12:02:48.975287 containerd[1542]: time="2025-01-29T12:02:48.975229340Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:02:48.975287 containerd[1542]: time="2025-01-29T12:02:48.975239038Z" level=info msg="Start snapshots syncer" Jan 29 12:02:48.975287 containerd[1542]: time="2025-01-29T12:02:48.975253465Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:02:48.975287 containerd[1542]: time="2025-01-29T12:02:48.975265368Z" level=info msg="Start streaming server" Jan 29 12:02:48.975604 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:02:48.975956 containerd[1542]: time="2025-01-29T12:02:48.975912271Z" level=info msg="containerd successfully booted in 0.045470s" Jan 29 12:02:49.075806 sshd_keygen[1535]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:02:49.102647 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:02:49.160682 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:02:49.169124 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 12:02:49.169552 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:02:49.172946 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 12:02:49.197237 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:02:49.208643 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:02:49.210913 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 12:02:49.212369 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 12:02:49.349660 systemd-networkd[1240]: eth0: Gained IPv6LL Jan 29 12:02:49.353072 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
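The containerd entries above end with the daemon serving on /run/containerd/containerd.sock and reporting a successful boot in about 45 ms. A minimal standard-library Go sketch of the kind of liveness probe a supervisor could point at that socket; the socket path is taken from the log, the two-second timeout is an arbitrary choice:

// Sketch only: check that the containerd socket reported in the log above accepts connections.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "containerd socket not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("containerd is accepting connections")
}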
Jan 29 12:02:49.355027 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 12:02:49.368702 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 12:02:49.371821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:49.374710 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:02:49.398548 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 12:02:49.399016 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 12:02:49.401182 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 12:02:49.404788 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 12:02:50.004960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:50.006640 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:02:50.008839 systemd[1]: Startup finished in 6.218s (kernel) + 4.357s (userspace) = 10.576s. Jan 29 12:02:50.030916 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:02:50.472558 kubelet[1638]: E0129 12:02:50.472284 1638 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:02:50.477009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:02:50.477291 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:02:58.103946 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:02:58.115671 systemd[1]: Started sshd@0-10.0.0.142:22-10.0.0.1:53990.service - OpenSSH per-connection server daemon (10.0.0.1:53990). Jan 29 12:02:58.157530 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 53990 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:58.159711 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:58.169034 systemd-logind[1523]: New session 1 of user core. Jan 29 12:02:58.170284 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:02:58.178695 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:02:58.190852 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:02:58.198763 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:02:58.202203 (systemd)[1657]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:02:58.304853 systemd[1657]: Queued start job for default target default.target. Jan 29 12:02:58.305250 systemd[1657]: Created slice app.slice - User Application Slice. Jan 29 12:02:58.305268 systemd[1657]: Reached target paths.target - Paths. Jan 29 12:02:58.305280 systemd[1657]: Reached target timers.target - Timers. Jan 29 12:02:58.318534 systemd[1657]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:02:58.325102 systemd[1657]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:02:58.325203 systemd[1657]: Reached target sockets.target - Sockets. 
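The kubelet exit above (run.go:74 "command failed", then status=1/FAILURE) is expected on a node that has not yet been joined to a cluster: nothing has written /var/lib/kubelet/config.yaml yet. A minimal sketch, not kubelet source, of the same pre-flight check, with the path taken from the error message and the exit status from the systemd line:

// Sketch (not kubelet code): fail fast when the kubelet config file is missing,
// mirroring the error and exit status recorded in the log above.
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml" // path from the log message
	if _, err := os.Stat(path); err != nil {
		fmt.Fprintf(os.Stderr, "command failed: failed to load kubelet config file %q: %v\n", path, err)
		os.Exit(1) // matches "Main process exited, code=exited, status=1/FAILURE"
	}
	fmt.Println("kubelet config present; start-up would continue")
}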
Jan 29 12:02:58.325218 systemd[1657]: Reached target basic.target - Basic System. Jan 29 12:02:58.325271 systemd[1657]: Reached target default.target - Main User Target. Jan 29 12:02:58.325318 systemd[1657]: Startup finished in 115ms. Jan 29 12:02:58.325715 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:02:58.327204 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:02:58.387737 systemd[1]: Started sshd@1-10.0.0.142:22-10.0.0.1:53994.service - OpenSSH per-connection server daemon (10.0.0.1:53994). Jan 29 12:02:58.421807 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 53994 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:58.423515 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:58.427919 systemd-logind[1523]: New session 2 of user core. Jan 29 12:02:58.438730 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 12:02:58.493158 sshd[1670]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:58.505683 systemd[1]: Started sshd@2-10.0.0.142:22-10.0.0.1:53998.service - OpenSSH per-connection server daemon (10.0.0.1:53998). Jan 29 12:02:58.506191 systemd[1]: sshd@1-10.0.0.142:22-10.0.0.1:53994.service: Deactivated successfully. Jan 29 12:02:58.508686 systemd-logind[1523]: Session 2 logged out. Waiting for processes to exit. Jan 29 12:02:58.509524 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 12:02:58.510429 systemd-logind[1523]: Removed session 2. Jan 29 12:02:58.539905 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 53998 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:58.541737 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:58.546250 systemd-logind[1523]: New session 3 of user core. Jan 29 12:02:58.556952 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:02:58.607932 sshd[1675]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:58.623773 systemd[1]: Started sshd@3-10.0.0.142:22-10.0.0.1:54010.service - OpenSSH per-connection server daemon (10.0.0.1:54010). Jan 29 12:02:58.624283 systemd[1]: sshd@2-10.0.0.142:22-10.0.0.1:53998.service: Deactivated successfully. Jan 29 12:02:58.626875 systemd-logind[1523]: Session 3 logged out. Waiting for processes to exit. Jan 29 12:02:58.627834 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 12:02:58.628865 systemd-logind[1523]: Removed session 3. Jan 29 12:02:58.654654 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 54010 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:58.656064 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:58.660040 systemd-logind[1523]: New session 4 of user core. Jan 29 12:02:58.673698 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:02:58.726801 sshd[1683]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:58.742669 systemd[1]: Started sshd@4-10.0.0.142:22-10.0.0.1:54020.service - OpenSSH per-connection server daemon (10.0.0.1:54020). Jan 29 12:02:58.743179 systemd[1]: sshd@3-10.0.0.142:22-10.0.0.1:54010.service: Deactivated successfully. Jan 29 12:02:58.745673 systemd-logind[1523]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:02:58.746814 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 12:02:58.747841 systemd-logind[1523]: Removed session 4. 
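Each "Accepted publickey" line identifies the client key by an OpenSSH SHA256 fingerprint (SHA256:cvfF…). That form is the unpadded base64 of the SHA-256 digest of the raw key blob; a small sketch of the derivation, using a placeholder key rather than the key from this log:

// Sketch: derive an OpenSSH-style SHA256 fingerprint from an authorized_keys-format line.
// The key below is a placeholder blob, not a real key and not the key from this log.
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"log"
	"strings"
)

func fingerprint(authorizedKey string) (string, error) {
	fields := strings.Fields(authorizedKey)
	if len(fields) < 2 {
		return "", fmt.Errorf("malformed key line")
	}
	blob, err := base64.StdEncoding.DecodeString(fields[1]) // raw key bytes
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(blob)
	// OpenSSH prints the digest as unpadded base64 prefixed with "SHA256:".
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]), nil
}

func main() {
	key := "ssh-ed25519 cGxhY2Vob2xkZXIta2V5LWJsb2I= core@example"
	fp, err := fingerprint(key)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(fp)
}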
Jan 29 12:02:58.774728 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 54020 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:58.776113 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:58.780520 systemd-logind[1523]: New session 5 of user core. Jan 29 12:02:58.790669 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:02:58.851162 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 12:02:58.851676 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:02:58.875962 sudo[1698]: pam_unix(sudo:session): session closed for user root Jan 29 12:02:58.878125 sshd[1691]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:58.887613 systemd[1]: Started sshd@5-10.0.0.142:22-10.0.0.1:54030.service - OpenSSH per-connection server daemon (10.0.0.1:54030). Jan 29 12:02:58.888131 systemd[1]: sshd@4-10.0.0.142:22-10.0.0.1:54020.service: Deactivated successfully. Jan 29 12:02:58.890756 systemd-logind[1523]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:02:58.890822 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:02:58.892352 systemd-logind[1523]: Removed session 5. Jan 29 12:02:58.922271 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 54030 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:58.924187 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:58.928234 systemd-logind[1523]: New session 6 of user core. Jan 29 12:02:58.943679 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:02:58.999801 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 12:02:59.000261 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:02:59.005029 sudo[1708]: pam_unix(sudo:session): session closed for user root Jan 29 12:02:59.011419 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 12:02:59.011764 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:02:59.033636 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 12:02:59.035751 auditctl[1711]: No rules Jan 29 12:02:59.037065 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:02:59.037422 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 12:02:59.039455 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:02:59.070293 augenrules[1730]: No rules Jan 29 12:02:59.072171 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:02:59.074002 sudo[1707]: pam_unix(sudo:session): session closed for user root Jan 29 12:02:59.075921 sshd[1700]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:59.092695 systemd[1]: Started sshd@6-10.0.0.142:22-10.0.0.1:54036.service - OpenSSH per-connection server daemon (10.0.0.1:54036). Jan 29 12:02:59.093495 systemd[1]: sshd@5-10.0.0.142:22-10.0.0.1:54030.service: Deactivated successfully. Jan 29 12:02:59.095742 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 12:02:59.096574 systemd-logind[1523]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:02:59.098309 systemd-logind[1523]: Removed session 6. 
Jan 29 12:02:59.122966 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 54036 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:59.124536 sshd[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:59.128877 systemd-logind[1523]: New session 7 of user core. Jan 29 12:02:59.138783 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:02:59.191496 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:02:59.191912 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:02:59.212688 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 12:02:59.232665 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 12:02:59.233001 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 12:02:59.720783 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:59.730673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:59.750897 systemd[1]: Reloading requested from client PID 1794 ('systemctl') (unit session-7.scope)... Jan 29 12:02:59.750916 systemd[1]: Reloading... Jan 29 12:02:59.820214 zram_generator::config[1835]: No configuration found. Jan 29 12:02:59.998066 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:03:00.068719 systemd[1]: Reloading finished in 317 ms. Jan 29 12:03:00.109312 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 12:03:00.109454 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 12:03:00.109903 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:00.113135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:03:00.261419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:00.266342 (kubelet)[1893]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:03:00.308353 kubelet[1893]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:03:00.308353 kubelet[1893]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:03:00.308353 kubelet[1893]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 12:03:00.309238 kubelet[1893]: I0129 12:03:00.309185 1893 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:03:00.437074 kubelet[1893]: I0129 12:03:00.437015 1893 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:03:00.437074 kubelet[1893]: I0129 12:03:00.437060 1893 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:03:00.437291 kubelet[1893]: I0129 12:03:00.437269 1893 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:03:00.450131 kubelet[1893]: I0129 12:03:00.450099 1893 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:03:00.461293 kubelet[1893]: I0129 12:03:00.461255 1893 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 12:03:00.462612 kubelet[1893]: I0129 12:03:00.462556 1893 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:03:00.462800 kubelet[1893]: I0129 12:03:00.462596 1893 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.142","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:03:00.463203 kubelet[1893]: I0129 12:03:00.463175 1893 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:03:00.463203 kubelet[1893]: I0129 12:03:00.463197 1893 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:03:00.463371 kubelet[1893]: I0129 12:03:00.463344 1893 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:03:00.464010 kubelet[1893]: I0129 12:03:00.463984 1893 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:03:00.464010 kubelet[1893]: I0129 12:03:00.464002 1893 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:03:00.464060 kubelet[1893]: I0129 12:03:00.464024 1893 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:03:00.464060 
kubelet[1893]: I0129 12:03:00.464044 1893 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:03:00.464100 kubelet[1893]: E0129 12:03:00.464068 1893 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:00.464355 kubelet[1893]: E0129 12:03:00.464338 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:00.467809 kubelet[1893]: I0129 12:03:00.467784 1893 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:03:00.468966 kubelet[1893]: I0129 12:03:00.468947 1893 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:03:00.469051 kubelet[1893]: W0129 12:03:00.469000 1893 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 12:03:00.470082 kubelet[1893]: I0129 12:03:00.470057 1893 server.go:1264] "Started kubelet" Jan 29 12:03:00.470267 kubelet[1893]: I0129 12:03:00.470232 1893 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:03:00.471540 kubelet[1893]: I0129 12:03:00.471456 1893 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:03:00.471697 kubelet[1893]: I0129 12:03:00.471678 1893 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:03:00.472540 kubelet[1893]: I0129 12:03:00.471792 1893 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:03:00.473928 kubelet[1893]: W0129 12:03:00.473676 1893 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 29 12:03:00.473928 kubelet[1893]: E0129 12:03:00.473712 1893 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 29 12:03:00.473928 kubelet[1893]: W0129 12:03:00.473785 1893 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.142" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 12:03:00.473928 kubelet[1893]: E0129 12:03:00.473801 1893 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.142" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 12:03:00.474084 kubelet[1893]: I0129 12:03:00.473960 1893 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:03:00.476619 kubelet[1893]: I0129 12:03:00.476577 1893 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:03:00.476784 kubelet[1893]: I0129 12:03:00.476758 1893 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:03:00.476893 kubelet[1893]: I0129 12:03:00.476860 1893 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:03:00.478077 kubelet[1893]: W0129 12:03:00.478043 1893 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is 
forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 29 12:03:00.478077 kubelet[1893]: E0129 12:03:00.478072 1893 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 29 12:03:00.478268 kubelet[1893]: E0129 12:03:00.478219 1893 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.142\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 29 12:03:00.478667 kubelet[1893]: I0129 12:03:00.478618 1893 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:03:00.481349 kubelet[1893]: I0129 12:03:00.480958 1893 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:03:00.481349 kubelet[1893]: I0129 12:03:00.480977 1893 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:03:00.492510 kubelet[1893]: E0129 12:03:00.492480 1893 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:03:00.495134 kubelet[1893]: E0129 12:03:00.495006 1893 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.142.181f2832f27c9a4c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.142,UID:10.0.0.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.142,},FirstTimestamp:2025-01-29 12:03:00.470028876 +0000 UTC m=+0.199676284,LastTimestamp:2025-01-29 12:03:00.470028876 +0000 UTC m=+0.199676284,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.142,}" Jan 29 12:03:00.501065 kubelet[1893]: I0129 12:03:00.501034 1893 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:03:00.501065 kubelet[1893]: I0129 12:03:00.501058 1893 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:03:00.501164 kubelet[1893]: I0129 12:03:00.501076 1893 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:03:00.501465 kubelet[1893]: E0129 12:03:00.501210 1893 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.142.181f2832f3d2d373 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.142,UID:10.0.0.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.142,},FirstTimestamp:2025-01-29 12:03:00.492456819 +0000 UTC m=+0.222104227,LastTimestamp:2025-01-29 12:03:00.492456819 +0000 UTC 
m=+0.222104227,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.142,}" Jan 29 12:03:00.577663 kubelet[1893]: I0129 12:03:00.577530 1893 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.142" Jan 29 12:03:01.120334 kubelet[1893]: I0129 12:03:01.120275 1893 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.142" Jan 29 12:03:01.122137 kubelet[1893]: I0129 12:03:01.122051 1893 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 12:03:01.122616 containerd[1542]: time="2025-01-29T12:03:01.122540566Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 12:03:01.123098 kubelet[1893]: I0129 12:03:01.122809 1893 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 12:03:01.128848 kubelet[1893]: I0129 12:03:01.128814 1893 policy_none.go:49] "None policy: Start" Jan 29 12:03:01.129570 kubelet[1893]: I0129 12:03:01.129526 1893 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:03:01.129570 kubelet[1893]: I0129 12:03:01.129570 1893 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:03:01.135969 kubelet[1893]: E0129 12:03:01.135925 1893 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" Jan 29 12:03:01.140163 kubelet[1893]: I0129 12:03:01.140119 1893 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:03:01.140357 kubelet[1893]: I0129 12:03:01.140319 1893 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:03:01.140482 kubelet[1893]: I0129 12:03:01.140460 1893 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:03:01.142883 kubelet[1893]: E0129 12:03:01.142848 1893 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.142\" not found" Jan 29 12:03:01.146536 sudo[1743]: pam_unix(sudo:session): session closed for user root Jan 29 12:03:01.148678 sshd[1737]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:01.150202 kubelet[1893]: I0129 12:03:01.150148 1893 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:03:01.151340 kubelet[1893]: I0129 12:03:01.151318 1893 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:03:01.151340 kubelet[1893]: I0129 12:03:01.151339 1893 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:03:01.151534 kubelet[1893]: I0129 12:03:01.151357 1893 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:03:01.151534 kubelet[1893]: E0129 12:03:01.151483 1893 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 29 12:03:01.152896 systemd[1]: sshd@6-10.0.0.142:22-10.0.0.1:54036.service: Deactivated successfully. Jan 29 12:03:01.157188 systemd-logind[1523]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:03:01.157808 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:03:01.158698 systemd-logind[1523]: Removed session 7. 
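Once the node registers, the kubelet pushes the node's pod CIDR down to the runtime (192.168.1.0/24 in the entries above, which containerd acknowledges while it waits for a CNI config to be dropped in). A quick sketch of what that allocation gives the node, with the CIDR copied from the log:

// Sketch: size of the pod CIDR reported in the kubelet log above.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.1.0/24") // CIDR from the log
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	total := 1 << (bits - ones) // 256 addresses in a /24
	fmt.Printf("%s -> %d addresses, ~%d usable pod IPs\n", ipnet, total, total-2)
}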
Jan 29 12:03:01.236751 kubelet[1893]: E0129 12:03:01.236668 1893 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" Jan 29 12:03:01.337257 kubelet[1893]: E0129 12:03:01.337188 1893 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" Jan 29 12:03:01.438285 kubelet[1893]: E0129 12:03:01.438044 1893 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" Jan 29 12:03:01.439439 kubelet[1893]: I0129 12:03:01.439302 1893 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 12:03:01.439569 kubelet[1893]: W0129 12:03:01.439528 1893 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:03:01.464574 kubelet[1893]: E0129 12:03:01.464515 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:01.538929 kubelet[1893]: E0129 12:03:01.538880 1893 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" Jan 29 12:03:01.639714 kubelet[1893]: E0129 12:03:01.639649 1893 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" Jan 29 12:03:01.740528 kubelet[1893]: E0129 12:03:01.740294 1893 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" Jan 29 12:03:01.841104 kubelet[1893]: E0129 12:03:01.841015 1893 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" Jan 29 12:03:02.465323 kubelet[1893]: I0129 12:03:02.465281 1893 apiserver.go:52] "Watching apiserver" Jan 29 12:03:02.465323 kubelet[1893]: E0129 12:03:02.465301 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:02.468744 kubelet[1893]: I0129 12:03:02.468678 1893 topology_manager.go:215] "Topology Admit Handler" podUID="6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" podNamespace="kube-system" podName="cilium-kl4xg" Jan 29 12:03:02.468903 kubelet[1893]: I0129 12:03:02.468822 1893 topology_manager.go:215] "Topology Admit Handler" podUID="f2cb9a9a-95e9-4163-8dd6-d4de4b9d15ba" podNamespace="kube-system" podName="kube-proxy-v4nrb" Jan 29 12:03:02.477145 kubelet[1893]: I0129 12:03:02.477110 1893 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:03:02.488530 kubelet[1893]: I0129 12:03:02.488475 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-bpf-maps\") pod \"cilium-kl4xg\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " pod="kube-system/cilium-kl4xg" Jan 29 12:03:02.488530 kubelet[1893]: I0129 12:03:02.488507 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-hostproc\") pod \"cilium-kl4xg\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " pod="kube-system/cilium-kl4xg" Jan 29 12:03:02.488530 kubelet[1893]: I0129 12:03:02.488527 1893 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-etc-cni-netd\") pod \"cilium-kl4xg\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " pod="kube-system/cilium-kl4xg" Jan 29 12:03:02.488530 kubelet[1893]: I0129 12:03:02.488542 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2cb9a9a-95e9-4163-8dd6-d4de4b9d15ba-lib-modules\") pod \"kube-proxy-v4nrb\" (UID: \"f2cb9a9a-95e9-4163-8dd6-d4de4b9d15ba\") " pod="kube-system/kube-proxy-v4nrb" Jan 29 12:03:02.488741 kubelet[1893]: I0129 12:03:02.488566 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-cilium-run\") pod \"cilium-kl4xg\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " pod="kube-system/cilium-kl4xg" Jan 29 12:03:02.488741 kubelet[1893]: I0129 12:03:02.488580 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-cni-path\") pod \"cilium-kl4xg\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " pod="kube-system/cilium-kl4xg" Jan 29 12:03:02.488741 kubelet[1893]: I0129 12:03:02.488593 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-lib-modules\") pod \"cilium-kl4xg\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " pod="kube-system/cilium-kl4xg" Jan 29 12:03:02.488741 kubelet[1893]: I0129 12:03:02.488621 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-cilium-config-path\") pod \"cilium-kl4xg\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " pod="kube-system/cilium-kl4xg" Jan 29 12:03:02.488741 kubelet[1893]: I0129 12:03:02.488645 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-host-proc-sys-kernel\") pod \"cilium-kl4xg\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " pod="kube-system/cilium-kl4xg" Jan 29 12:03:02.488741 kubelet[1893]: I0129 12:03:02.488659 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f2cb9a9a-95e9-4163-8dd6-d4de4b9d15ba-kube-proxy\") pod \"kube-proxy-v4nrb\" (UID: \"f2cb9a9a-95e9-4163-8dd6-d4de4b9d15ba\") " pod="kube-system/kube-proxy-v4nrb" Jan 29 12:03:02.488913 kubelet[1893]: I0129 12:03:02.488673 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2cb9a9a-95e9-4163-8dd6-d4de4b9d15ba-xtables-lock\") pod \"kube-proxy-v4nrb\" (UID: \"f2cb9a9a-95e9-4163-8dd6-d4de4b9d15ba\") " pod="kube-system/kube-proxy-v4nrb" Jan 29 12:03:02.488913 kubelet[1893]: I0129 12:03:02.488688 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-host-proc-sys-net\") pod \"cilium-kl4xg\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " pod="kube-system/cilium-kl4xg" Jan 29 12:03:02.488913 kubelet[1893]: I0129 12:03:02.488733 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-hubble-tls\") pod \"cilium-kl4xg\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " pod="kube-system/cilium-kl4xg" Jan 29 12:03:02.488913 kubelet[1893]: I0129 12:03:02.488764 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t66b4\" (UniqueName: \"kubernetes.io/projected/f2cb9a9a-95e9-4163-8dd6-d4de4b9d15ba-kube-api-access-t66b4\") pod \"kube-proxy-v4nrb\" (UID: \"f2cb9a9a-95e9-4163-8dd6-d4de4b9d15ba\") " pod="kube-system/kube-proxy-v4nrb" Jan 29 12:03:02.488913 kubelet[1893]: I0129 12:03:02.488785 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-clustermesh-secrets\") pod \"cilium-kl4xg\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " pod="kube-system/cilium-kl4xg" Jan 29 12:03:02.489047 kubelet[1893]: I0129 12:03:02.488806 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9rmm\" (UniqueName: \"kubernetes.io/projected/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-kube-api-access-f9rmm\") pod \"cilium-kl4xg\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " pod="kube-system/cilium-kl4xg" Jan 29 12:03:02.489047 kubelet[1893]: I0129 12:03:02.488836 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-cilium-cgroup\") pod \"cilium-kl4xg\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " pod="kube-system/cilium-kl4xg" Jan 29 12:03:02.489047 kubelet[1893]: I0129 12:03:02.488868 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-xtables-lock\") pod \"cilium-kl4xg\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " pod="kube-system/cilium-kl4xg" Jan 29 12:03:02.774939 kubelet[1893]: E0129 12:03:02.774794 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:02.774939 kubelet[1893]: E0129 12:03:02.774815 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:02.775699 containerd[1542]: time="2025-01-29T12:03:02.775660114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v4nrb,Uid:f2cb9a9a-95e9-4163-8dd6-d4de4b9d15ba,Namespace:kube-system,Attempt:0,}" Jan 29 12:03:02.776066 containerd[1542]: time="2025-01-29T12:03:02.775760142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kl4xg,Uid:6c5bf952-7b29-4cb8-8ebb-7df04efe9abe,Namespace:kube-system,Attempt:0,}" Jan 29 12:03:03.407807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount883913289.mount: Deactivated successfully. 
Jan 29 12:03:03.416901 containerd[1542]: time="2025-01-29T12:03:03.416841048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:03:03.417915 containerd[1542]: time="2025-01-29T12:03:03.417879045Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:03:03.418588 containerd[1542]: time="2025-01-29T12:03:03.418512082Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 12:03:03.419479 containerd[1542]: time="2025-01-29T12:03:03.419398393Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:03:03.420526 containerd[1542]: time="2025-01-29T12:03:03.420472187Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:03:03.423171 containerd[1542]: time="2025-01-29T12:03:03.423115453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:03:03.423938 containerd[1542]: time="2025-01-29T12:03:03.423891117Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 648.115436ms" Jan 29 12:03:03.426119 containerd[1542]: time="2025-01-29T12:03:03.426077988Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 650.209813ms" Jan 29 12:03:03.466022 kubelet[1893]: E0129 12:03:03.465966 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:03.519677 containerd[1542]: time="2025-01-29T12:03:03.519486098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:03.519677 containerd[1542]: time="2025-01-29T12:03:03.519618666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:03.519677 containerd[1542]: time="2025-01-29T12:03:03.519634566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:03.519872 containerd[1542]: time="2025-01-29T12:03:03.519753118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:03.520070 containerd[1542]: time="2025-01-29T12:03:03.519644906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:03.520070 containerd[1542]: time="2025-01-29T12:03:03.519947423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:03.520070 containerd[1542]: time="2025-01-29T12:03:03.519961830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:03.520239 containerd[1542]: time="2025-01-29T12:03:03.520183636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:03.612862 containerd[1542]: time="2025-01-29T12:03:03.612822253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v4nrb,Uid:f2cb9a9a-95e9-4163-8dd6-d4de4b9d15ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4b3cd94e0b5c38c5063125a51463e868cab86c5750225d850594b62854621ea\"" Jan 29 12:03:03.613114 containerd[1542]: time="2025-01-29T12:03:03.612908314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kl4xg,Uid:6c5bf952-7b29-4cb8-8ebb-7df04efe9abe,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\"" Jan 29 12:03:03.614287 kubelet[1893]: E0129 12:03:03.613996 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:03.614287 kubelet[1893]: E0129 12:03:03.614017 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:03.615952 containerd[1542]: time="2025-01-29T12:03:03.615919430Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 12:03:04.467016 kubelet[1893]: E0129 12:03:04.466981 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:05.468126 kubelet[1893]: E0129 12:03:05.468060 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:06.468447 kubelet[1893]: E0129 12:03:06.468229 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:07.307642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1822713957.mount: Deactivated successfully. 
Jan 29 12:03:07.469539 kubelet[1893]: E0129 12:03:07.469484 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:08.470250 kubelet[1893]: E0129 12:03:08.470216 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:09.470373 kubelet[1893]: E0129 12:03:09.470297 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:10.471357 kubelet[1893]: E0129 12:03:10.471309 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:11.472511 kubelet[1893]: E0129 12:03:11.472456 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:12.473190 kubelet[1893]: E0129 12:03:12.473139 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:12.529072 containerd[1542]: time="2025-01-29T12:03:12.529010834Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:12.529995 containerd[1542]: time="2025-01-29T12:03:12.529957389Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 12:03:12.531440 containerd[1542]: time="2025-01-29T12:03:12.531357685Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:12.532973 containerd[1542]: time="2025-01-29T12:03:12.532933630Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.916984574s" Jan 29 12:03:12.532973 containerd[1542]: time="2025-01-29T12:03:12.532963926Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 12:03:12.533867 containerd[1542]: time="2025-01-29T12:03:12.533840740Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 12:03:12.535430 containerd[1542]: time="2025-01-29T12:03:12.535394564Z" level=info msg="CreateContainer within sandbox \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 12:03:12.548821 containerd[1542]: time="2025-01-29T12:03:12.548778809Z" level=info msg="CreateContainer within sandbox \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7a31d1d047d138f8baab7fc48da57aefbf82b69a162394ddfcaa637510aa0869\"" Jan 29 12:03:12.549524 containerd[1542]: time="2025-01-29T12:03:12.549472149Z" level=info msg="StartContainer for 
\"7a31d1d047d138f8baab7fc48da57aefbf82b69a162394ddfcaa637510aa0869\"" Jan 29 12:03:12.600929 containerd[1542]: time="2025-01-29T12:03:12.600878296Z" level=info msg="StartContainer for \"7a31d1d047d138f8baab7fc48da57aefbf82b69a162394ddfcaa637510aa0869\" returns successfully" Jan 29 12:03:13.169530 kubelet[1893]: E0129 12:03:13.169506 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:13.258280 containerd[1542]: time="2025-01-29T12:03:13.258225462Z" level=info msg="shim disconnected" id=7a31d1d047d138f8baab7fc48da57aefbf82b69a162394ddfcaa637510aa0869 namespace=k8s.io Jan 29 12:03:13.258280 containerd[1542]: time="2025-01-29T12:03:13.258274343Z" level=warning msg="cleaning up after shim disconnected" id=7a31d1d047d138f8baab7fc48da57aefbf82b69a162394ddfcaa637510aa0869 namespace=k8s.io Jan 29 12:03:13.258280 containerd[1542]: time="2025-01-29T12:03:13.258283160Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:03:13.473546 kubelet[1893]: E0129 12:03:13.473384 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:13.544548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a31d1d047d138f8baab7fc48da57aefbf82b69a162394ddfcaa637510aa0869-rootfs.mount: Deactivated successfully. Jan 29 12:03:14.174725 kubelet[1893]: E0129 12:03:14.174683 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:14.176559 containerd[1542]: time="2025-01-29T12:03:14.176514784Z" level=info msg="CreateContainer within sandbox \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 12:03:14.195558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1623580361.mount: Deactivated successfully. Jan 29 12:03:14.198411 containerd[1542]: time="2025-01-29T12:03:14.198349604Z" level=info msg="CreateContainer within sandbox \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"441573d29db573ad48bcc53335527b3faf6d8a09c8e66b5d4dda7ca5788946a9\"" Jan 29 12:03:14.198718 containerd[1542]: time="2025-01-29T12:03:14.198687778Z" level=info msg="StartContainer for \"441573d29db573ad48bcc53335527b3faf6d8a09c8e66b5d4dda7ca5788946a9\"" Jan 29 12:03:14.209915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2156930283.mount: Deactivated successfully. Jan 29 12:03:14.251859 containerd[1542]: time="2025-01-29T12:03:14.251811655Z" level=info msg="StartContainer for \"441573d29db573ad48bcc53335527b3faf6d8a09c8e66b5d4dda7ca5788946a9\" returns successfully" Jan 29 12:03:14.261396 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:03:14.261748 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:03:14.262491 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:03:14.268845 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:03:14.285966 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 12:03:14.473780 containerd[1542]: time="2025-01-29T12:03:14.473444878Z" level=info msg="shim disconnected" id=441573d29db573ad48bcc53335527b3faf6d8a09c8e66b5d4dda7ca5788946a9 namespace=k8s.io Jan 29 12:03:14.473780 containerd[1542]: time="2025-01-29T12:03:14.473560224Z" level=warning msg="cleaning up after shim disconnected" id=441573d29db573ad48bcc53335527b3faf6d8a09c8e66b5d4dda7ca5788946a9 namespace=k8s.io Jan 29 12:03:14.473780 containerd[1542]: time="2025-01-29T12:03:14.473570273Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:03:14.474609 kubelet[1893]: E0129 12:03:14.474377 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:14.504272 containerd[1542]: time="2025-01-29T12:03:14.504216003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:14.505088 containerd[1542]: time="2025-01-29T12:03:14.505050087Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 29 12:03:14.506255 containerd[1542]: time="2025-01-29T12:03:14.506204191Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:14.508494 containerd[1542]: time="2025-01-29T12:03:14.508453709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:14.509067 containerd[1542]: time="2025-01-29T12:03:14.509024930Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.975153392s" Jan 29 12:03:14.509112 containerd[1542]: time="2025-01-29T12:03:14.509066999Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 12:03:14.511187 containerd[1542]: time="2025-01-29T12:03:14.511162658Z" level=info msg="CreateContainer within sandbox \"e4b3cd94e0b5c38c5063125a51463e868cab86c5750225d850594b62854621ea\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:03:14.525972 containerd[1542]: time="2025-01-29T12:03:14.525907435Z" level=info msg="CreateContainer within sandbox \"e4b3cd94e0b5c38c5063125a51463e868cab86c5750225d850594b62854621ea\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"39f94dcb59272c22431cba2543f1d74f0411680db531a279e2738c25e307433a\"" Jan 29 12:03:14.526609 containerd[1542]: time="2025-01-29T12:03:14.526567983Z" level=info msg="StartContainer for \"39f94dcb59272c22431cba2543f1d74f0411680db531a279e2738c25e307433a\"" Jan 29 12:03:14.545665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-441573d29db573ad48bcc53335527b3faf6d8a09c8e66b5d4dda7ca5788946a9-rootfs.mount: Deactivated successfully. Jan 29 12:03:14.553589 systemd[1]: run-containerd-runc-k8s.io-39f94dcb59272c22431cba2543f1d74f0411680db531a279e2738c25e307433a-runc.m7Dter.mount: Deactivated successfully. 
Jan 29 12:03:14.631532 containerd[1542]: time="2025-01-29T12:03:14.631478220Z" level=info msg="StartContainer for \"39f94dcb59272c22431cba2543f1d74f0411680db531a279e2738c25e307433a\" returns successfully" Jan 29 12:03:15.177075 kubelet[1893]: E0129 12:03:15.176923 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:15.178713 kubelet[1893]: E0129 12:03:15.178676 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:15.180392 containerd[1542]: time="2025-01-29T12:03:15.180354483Z" level=info msg="CreateContainer within sandbox \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 12:03:15.184844 kubelet[1893]: I0129 12:03:15.184797 1893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v4nrb" podStartSLOduration=4.289734353 podStartE2EDuration="15.184784018s" podCreationTimestamp="2025-01-29 12:03:00 +0000 UTC" firstStartedPulling="2025-01-29 12:03:03.614808948 +0000 UTC m=+3.344456356" lastFinishedPulling="2025-01-29 12:03:14.509858613 +0000 UTC m=+14.239506021" observedRunningTime="2025-01-29 12:03:15.184560048 +0000 UTC m=+14.914207456" watchObservedRunningTime="2025-01-29 12:03:15.184784018 +0000 UTC m=+14.914431426" Jan 29 12:03:15.197839 containerd[1542]: time="2025-01-29T12:03:15.197783723Z" level=info msg="CreateContainer within sandbox \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6b1aad10748cc5c46298353ada01e027cfbe010971777c9e7acaf5db0400b1be\"" Jan 29 12:03:15.198469 containerd[1542]: time="2025-01-29T12:03:15.198352680Z" level=info msg="StartContainer for \"6b1aad10748cc5c46298353ada01e027cfbe010971777c9e7acaf5db0400b1be\"" Jan 29 12:03:15.268452 containerd[1542]: time="2025-01-29T12:03:15.268382415Z" level=info msg="StartContainer for \"6b1aad10748cc5c46298353ada01e027cfbe010971777c9e7acaf5db0400b1be\" returns successfully" Jan 29 12:03:15.441011 containerd[1542]: time="2025-01-29T12:03:15.440860341Z" level=info msg="shim disconnected" id=6b1aad10748cc5c46298353ada01e027cfbe010971777c9e7acaf5db0400b1be namespace=k8s.io Jan 29 12:03:15.441011 containerd[1542]: time="2025-01-29T12:03:15.440914673Z" level=warning msg="cleaning up after shim disconnected" id=6b1aad10748cc5c46298353ada01e027cfbe010971777c9e7acaf5db0400b1be namespace=k8s.io Jan 29 12:03:15.441011 containerd[1542]: time="2025-01-29T12:03:15.440924932Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:03:15.474897 kubelet[1893]: E0129 12:03:15.474831 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:16.182589 kubelet[1893]: E0129 12:03:16.182513 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:16.182589 kubelet[1893]: E0129 12:03:16.182513 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:16.184312 containerd[1542]: 
time="2025-01-29T12:03:16.184251850Z" level=info msg="CreateContainer within sandbox \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 12:03:16.201760 containerd[1542]: time="2025-01-29T12:03:16.201715084Z" level=info msg="CreateContainer within sandbox \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d611c542b6c93b1b9751e12c4216171ae0c2e1a0d7e8a017d6eb7622eeb0e2d5\"" Jan 29 12:03:16.202271 containerd[1542]: time="2025-01-29T12:03:16.202226563Z" level=info msg="StartContainer for \"d611c542b6c93b1b9751e12c4216171ae0c2e1a0d7e8a017d6eb7622eeb0e2d5\"" Jan 29 12:03:16.259169 containerd[1542]: time="2025-01-29T12:03:16.259136198Z" level=info msg="StartContainer for \"d611c542b6c93b1b9751e12c4216171ae0c2e1a0d7e8a017d6eb7622eeb0e2d5\" returns successfully" Jan 29 12:03:16.282007 containerd[1542]: time="2025-01-29T12:03:16.281952128Z" level=info msg="shim disconnected" id=d611c542b6c93b1b9751e12c4216171ae0c2e1a0d7e8a017d6eb7622eeb0e2d5 namespace=k8s.io Jan 29 12:03:16.282216 containerd[1542]: time="2025-01-29T12:03:16.282187540Z" level=warning msg="cleaning up after shim disconnected" id=d611c542b6c93b1b9751e12c4216171ae0c2e1a0d7e8a017d6eb7622eeb0e2d5 namespace=k8s.io Jan 29 12:03:16.282216 containerd[1542]: time="2025-01-29T12:03:16.282203029Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:03:16.475106 kubelet[1893]: E0129 12:03:16.474951 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:16.545220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d611c542b6c93b1b9751e12c4216171ae0c2e1a0d7e8a017d6eb7622eeb0e2d5-rootfs.mount: Deactivated successfully. 
Jan 29 12:03:17.185118 kubelet[1893]: E0129 12:03:17.185089 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:17.187010 containerd[1542]: time="2025-01-29T12:03:17.186976549Z" level=info msg="CreateContainer within sandbox \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 12:03:17.202659 containerd[1542]: time="2025-01-29T12:03:17.202630610Z" level=info msg="CreateContainer within sandbox \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e\"" Jan 29 12:03:17.203038 containerd[1542]: time="2025-01-29T12:03:17.203014760Z" level=info msg="StartContainer for \"a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e\"" Jan 29 12:03:17.255855 containerd[1542]: time="2025-01-29T12:03:17.255802137Z" level=info msg="StartContainer for \"a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e\" returns successfully" Jan 29 12:03:17.320620 kubelet[1893]: I0129 12:03:17.320589 1893 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 12:03:17.475732 kubelet[1893]: E0129 12:03:17.475612 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:17.701461 kernel: Initializing XFRM netlink socket Jan 29 12:03:18.188730 kubelet[1893]: E0129 12:03:18.188691 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:18.201378 kubelet[1893]: I0129 12:03:18.201287 1893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kl4xg" podStartSLOduration=9.282036769 podStartE2EDuration="18.201261122s" podCreationTimestamp="2025-01-29 12:03:00 +0000 UTC" firstStartedPulling="2025-01-29 12:03:03.614515608 +0000 UTC m=+3.344163016" lastFinishedPulling="2025-01-29 12:03:12.533739961 +0000 UTC m=+12.263387369" observedRunningTime="2025-01-29 12:03:18.200984604 +0000 UTC m=+17.930632012" watchObservedRunningTime="2025-01-29 12:03:18.201261122 +0000 UTC m=+17.930908530" Jan 29 12:03:18.476834 kubelet[1893]: E0129 12:03:18.476688 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:19.189532 kubelet[1893]: E0129 12:03:19.189493 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:19.367791 systemd-networkd[1240]: cilium_host: Link UP Jan 29 12:03:19.368008 systemd-networkd[1240]: cilium_net: Link UP Jan 29 12:03:19.368195 systemd-networkd[1240]: cilium_net: Gained carrier Jan 29 12:03:19.368372 systemd-networkd[1240]: cilium_host: Gained carrier Jan 29 12:03:19.382579 systemd-networkd[1240]: cilium_host: Gained IPv6LL Jan 29 12:03:19.469030 systemd-networkd[1240]: cilium_vxlan: Link UP Jan 29 12:03:19.469039 systemd-networkd[1240]: cilium_vxlan: Gained carrier Jan 29 12:03:19.477050 kubelet[1893]: E0129 12:03:19.477002 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
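The pod_startup_latency_tracker entries above report both a podStartSLOduration and a podStartE2EDuration. The numbers are consistent with the SLO figure being the end-to-end startup time minus the time spent pulling images; that subtraction rule is an inference from the values in the log, not something the log states. A quick arithmetic check against the cilium-kl4xg entry, with all values copied from that line (seconds past 12:03:00, the pod's creation time):

# Values from the cilium-kl4xg pod_startup_latency_tracker log entry.
first_started_pulling = 3.614515608    # firstStartedPulling 12:03:03.614515608
last_finished_pulling = 12.533739961   # lastFinishedPulling 12:03:12.533739961
watch_observed_running = 18.201261122  # watchObservedRunningTime 12:03:18.201261122

e2e_duration = watch_observed_running - 0.0                     # pod created at 12:03:00
pull_duration = last_finished_pulling - first_started_pulling
slo_duration = e2e_duration - pull_duration

print(f"E2E  = {e2e_duration:.9f}s")   # 18.201261122s, matches podStartE2EDuration
print(f"pull = {pull_duration:.9f}s")
print(f"SLO  = {slo_duration:.9f}s")   # 9.282036769s, matches podStartSLOduration

The kube-proxy-v4nrb entry earlier in the capture follows the same pattern (15.184784018s end to end, 4.289734353s once its 10.9s image pull is excluded).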
Jan 29 12:03:19.671440 kernel: NET: Registered PF_ALG protocol family Jan 29 12:03:19.941579 systemd-networkd[1240]: cilium_net: Gained IPv6LL Jan 29 12:03:20.191211 kubelet[1893]: E0129 12:03:20.191137 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:20.303814 systemd-networkd[1240]: lxc_health: Link UP Jan 29 12:03:20.311699 systemd-networkd[1240]: lxc_health: Gained carrier Jan 29 12:03:20.464759 kubelet[1893]: E0129 12:03:20.464704 1893 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:20.478068 kubelet[1893]: E0129 12:03:20.478013 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:20.837560 systemd-networkd[1240]: cilium_vxlan: Gained IPv6LL Jan 29 12:03:21.192929 kubelet[1893]: E0129 12:03:21.192812 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:21.479189 kubelet[1893]: E0129 12:03:21.479027 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:21.669568 systemd-networkd[1240]: lxc_health: Gained IPv6LL Jan 29 12:03:22.194566 kubelet[1893]: E0129 12:03:22.194529 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:22.256635 kubelet[1893]: I0129 12:03:22.256591 1893 topology_manager.go:215] "Topology Admit Handler" podUID="f3a12efa-09ac-4f4a-b58f-36f746009796" podNamespace="default" podName="nginx-deployment-85f456d6dd-9qk9x" Jan 29 12:03:22.313889 kubelet[1893]: I0129 12:03:22.313835 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p45t\" (UniqueName: \"kubernetes.io/projected/f3a12efa-09ac-4f4a-b58f-36f746009796-kube-api-access-8p45t\") pod \"nginx-deployment-85f456d6dd-9qk9x\" (UID: \"f3a12efa-09ac-4f4a-b58f-36f746009796\") " pod="default/nginx-deployment-85f456d6dd-9qk9x" Jan 29 12:03:22.480185 kubelet[1893]: E0129 12:03:22.480038 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:22.561386 containerd[1542]: time="2025-01-29T12:03:22.561335124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-9qk9x,Uid:f3a12efa-09ac-4f4a-b58f-36f746009796,Namespace:default,Attempt:0,}" Jan 29 12:03:22.606056 systemd-networkd[1240]: lxc1199a9d5d13a: Link UP Jan 29 12:03:22.614435 kernel: eth0: renamed from tmpd41f3 Jan 29 12:03:22.624222 systemd-networkd[1240]: lxc1199a9d5d13a: Gained carrier Jan 29 12:03:23.481026 kubelet[1893]: E0129 12:03:23.480959 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:23.845587 systemd-networkd[1240]: lxc1199a9d5d13a: Gained IPv6LL Jan 29 12:03:24.481745 kubelet[1893]: E0129 12:03:24.481465 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:24.581808 containerd[1542]: time="2025-01-29T12:03:24.581727073Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:24.581808 containerd[1542]: time="2025-01-29T12:03:24.581773983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:24.581808 containerd[1542]: time="2025-01-29T12:03:24.581784883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:24.582312 containerd[1542]: time="2025-01-29T12:03:24.581871870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:24.606260 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:03:24.629600 containerd[1542]: time="2025-01-29T12:03:24.629561382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-9qk9x,Uid:f3a12efa-09ac-4f4a-b58f-36f746009796,Namespace:default,Attempt:0,} returns sandbox id \"d41f326c1bfc3c1d4d00eeabd79ba03f5e003c55bc53603e93f8a1bb6b598766\"" Jan 29 12:03:24.631112 containerd[1542]: time="2025-01-29T12:03:24.631070444Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 12:03:25.481668 kubelet[1893]: E0129 12:03:25.481620 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:26.481996 kubelet[1893]: E0129 12:03:26.481949 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:27.482332 kubelet[1893]: E0129 12:03:27.482278 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:27.595531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2670949344.mount: Deactivated successfully. 
Jan 29 12:03:28.483054 kubelet[1893]: E0129 12:03:28.482990 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:28.607245 containerd[1542]: time="2025-01-29T12:03:28.607182851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:28.607996 containerd[1542]: time="2025-01-29T12:03:28.607964882Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 29 12:03:28.609454 containerd[1542]: time="2025-01-29T12:03:28.609396632Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:28.612466 containerd[1542]: time="2025-01-29T12:03:28.612429544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:28.613251 containerd[1542]: time="2025-01-29T12:03:28.613225451Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 3.98211501s" Jan 29 12:03:28.613299 containerd[1542]: time="2025-01-29T12:03:28.613255008Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 12:03:28.615181 containerd[1542]: time="2025-01-29T12:03:28.615135002Z" level=info msg="CreateContainer within sandbox \"d41f326c1bfc3c1d4d00eeabd79ba03f5e003c55bc53603e93f8a1bb6b598766\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 12:03:28.627470 containerd[1542]: time="2025-01-29T12:03:28.627428680Z" level=info msg="CreateContainer within sandbox \"d41f326c1bfc3c1d4d00eeabd79ba03f5e003c55bc53603e93f8a1bb6b598766\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"fce8574d9399634206950181a844f58d5a8da92400f4195e55a50b75132fa929\"" Jan 29 12:03:28.627848 containerd[1542]: time="2025-01-29T12:03:28.627825757Z" level=info msg="StartContainer for \"fce8574d9399634206950181a844f58d5a8da92400f4195e55a50b75132fa929\"" Jan 29 12:03:28.680556 containerd[1542]: time="2025-01-29T12:03:28.680510340Z" level=info msg="StartContainer for \"fce8574d9399634206950181a844f58d5a8da92400f4195e55a50b75132fa929\" returns successfully" Jan 29 12:03:29.213802 kubelet[1893]: I0129 12:03:29.213749 1893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-9qk9x" podStartSLOduration=3.230344965 podStartE2EDuration="7.213712448s" podCreationTimestamp="2025-01-29 12:03:22 +0000 UTC" firstStartedPulling="2025-01-29 12:03:24.630709252 +0000 UTC m=+24.360356660" lastFinishedPulling="2025-01-29 12:03:28.614076734 +0000 UTC m=+28.343724143" observedRunningTime="2025-01-29 12:03:29.213670006 +0000 UTC m=+28.943317414" watchObservedRunningTime="2025-01-29 12:03:29.213712448 +0000 UTC m=+28.943359856" Jan 29 12:03:29.483521 kubelet[1893]: E0129 12:03:29.483322 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 
12:03:30.483715 kubelet[1893]: E0129 12:03:30.483659 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:31.484615 kubelet[1893]: E0129 12:03:31.484515 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:32.485205 kubelet[1893]: E0129 12:03:32.485140 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:33.485961 kubelet[1893]: E0129 12:03:33.485869 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:34.383058 kubelet[1893]: I0129 12:03:34.382995 1893 topology_manager.go:215] "Topology Admit Handler" podUID="678fb663-68b4-404b-9f95-997eff7d8f98" podNamespace="default" podName="nfs-server-provisioner-0" Jan 29 12:03:34.466175 update_engine[1527]: I20250129 12:03:34.466066 1527 update_attempter.cc:509] Updating boot flags... Jan 29 12:03:34.485863 kubelet[1893]: I0129 12:03:34.485816 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/678fb663-68b4-404b-9f95-997eff7d8f98-data\") pod \"nfs-server-provisioner-0\" (UID: \"678fb663-68b4-404b-9f95-997eff7d8f98\") " pod="default/nfs-server-provisioner-0" Jan 29 12:03:34.486046 kubelet[1893]: I0129 12:03:34.485868 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdql6\" (UniqueName: \"kubernetes.io/projected/678fb663-68b4-404b-9f95-997eff7d8f98-kube-api-access-kdql6\") pod \"nfs-server-provisioner-0\" (UID: \"678fb663-68b4-404b-9f95-997eff7d8f98\") " pod="default/nfs-server-provisioner-0" Jan 29 12:03:34.486046 kubelet[1893]: E0129 12:03:34.485932 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:34.490433 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3099) Jan 29 12:03:34.525178 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3090) Jan 29 12:03:34.554510 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3090) Jan 29 12:03:34.688119 containerd[1542]: time="2025-01-29T12:03:34.687985134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:678fb663-68b4-404b-9f95-997eff7d8f98,Namespace:default,Attempt:0,}" Jan 29 12:03:34.716155 systemd-networkd[1240]: lxc054e22412438: Link UP Jan 29 12:03:34.730443 kernel: eth0: renamed from tmp670cc Jan 29 12:03:34.741031 systemd-networkd[1240]: lxc054e22412438: Gained carrier Jan 29 12:03:34.992626 containerd[1542]: time="2025-01-29T12:03:34.991991388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:34.992626 containerd[1542]: time="2025-01-29T12:03:34.992531593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:34.992626 containerd[1542]: time="2025-01-29T12:03:34.992549667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:34.992784 containerd[1542]: time="2025-01-29T12:03:34.992750839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:35.023240 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:03:35.047939 containerd[1542]: time="2025-01-29T12:03:35.047886916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:678fb663-68b4-404b-9f95-997eff7d8f98,Namespace:default,Attempt:0,} returns sandbox id \"670ccb590328c2e30f0143828221661825b0c1b6a7e978c6cc4e77d38c7dc2f1\"" Jan 29 12:03:35.049397 containerd[1542]: time="2025-01-29T12:03:35.049370387Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 12:03:35.486083 kubelet[1893]: E0129 12:03:35.486022 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:36.069582 systemd-networkd[1240]: lxc054e22412438: Gained IPv6LL Jan 29 12:03:36.486339 kubelet[1893]: E0129 12:03:36.486282 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:37.157435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4042390437.mount: Deactivated successfully. Jan 29 12:03:37.487260 kubelet[1893]: E0129 12:03:37.487114 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:38.487566 kubelet[1893]: E0129 12:03:38.487490 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:39.487780 kubelet[1893]: E0129 12:03:39.487699 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:40.000296 containerd[1542]: time="2025-01-29T12:03:40.000218408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:40.001391 containerd[1542]: time="2025-01-29T12:03:40.001348233Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 29 12:03:40.002550 containerd[1542]: time="2025-01-29T12:03:40.002508115Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:40.005265 containerd[1542]: time="2025-01-29T12:03:40.005220009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:40.006205 containerd[1542]: time="2025-01-29T12:03:40.006166869Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.95675924s" Jan 29 12:03:40.006274 containerd[1542]: 
time="2025-01-29T12:03:40.006202976Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 29 12:03:40.008676 containerd[1542]: time="2025-01-29T12:03:40.008640423Z" level=info msg="CreateContainer within sandbox \"670ccb590328c2e30f0143828221661825b0c1b6a7e978c6cc4e77d38c7dc2f1\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 12:03:40.021708 containerd[1542]: time="2025-01-29T12:03:40.021667858Z" level=info msg="CreateContainer within sandbox \"670ccb590328c2e30f0143828221661825b0c1b6a7e978c6cc4e77d38c7dc2f1\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"6ede21b0e5b46297618ab84611976d858f702281e8cf1be406d7127b5a9e8662\"" Jan 29 12:03:40.022120 containerd[1542]: time="2025-01-29T12:03:40.022082621Z" level=info msg="StartContainer for \"6ede21b0e5b46297618ab84611976d858f702281e8cf1be406d7127b5a9e8662\"" Jan 29 12:03:40.107379 containerd[1542]: time="2025-01-29T12:03:40.107331741Z" level=info msg="StartContainer for \"6ede21b0e5b46297618ab84611976d858f702281e8cf1be406d7127b5a9e8662\" returns successfully" Jan 29 12:03:40.234649 kubelet[1893]: I0129 12:03:40.234584 1893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.276506334 podStartE2EDuration="6.234566956s" podCreationTimestamp="2025-01-29 12:03:34 +0000 UTC" firstStartedPulling="2025-01-29 12:03:35.049017609 +0000 UTC m=+34.778665017" lastFinishedPulling="2025-01-29 12:03:40.007078231 +0000 UTC m=+39.736725639" observedRunningTime="2025-01-29 12:03:40.234478448 +0000 UTC m=+39.964125856" watchObservedRunningTime="2025-01-29 12:03:40.234566956 +0000 UTC m=+39.964214364" Jan 29 12:03:40.465079 kubelet[1893]: E0129 12:03:40.465050 1893 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:40.489417 kubelet[1893]: E0129 12:03:40.489377 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:41.490568 kubelet[1893]: E0129 12:03:41.490486 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:42.491611 kubelet[1893]: E0129 12:03:42.491543 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:43.492108 kubelet[1893]: E0129 12:03:43.492036 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:44.493081 kubelet[1893]: E0129 12:03:44.493029 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:45.494216 kubelet[1893]: E0129 12:03:45.494152 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:46.494872 kubelet[1893]: E0129 12:03:46.494816 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:47.495327 kubelet[1893]: E0129 12:03:47.495279 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:48.495540 kubelet[1893]: E0129 12:03:48.495464 1893 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:49.418335 kubelet[1893]: I0129 12:03:49.418277 1893 topology_manager.go:215] "Topology Admit Handler" podUID="a7fc09c7-d4f3-4f0d-b3bf-27c720ed6a5d" podNamespace="default" podName="test-pod-1" Jan 29 12:03:49.468500 kubelet[1893]: I0129 12:03:49.468464 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-28857783-74a9-472d-8256-d741a5fa092b\" (UniqueName: \"kubernetes.io/nfs/a7fc09c7-d4f3-4f0d-b3bf-27c720ed6a5d-pvc-28857783-74a9-472d-8256-d741a5fa092b\") pod \"test-pod-1\" (UID: \"a7fc09c7-d4f3-4f0d-b3bf-27c720ed6a5d\") " pod="default/test-pod-1" Jan 29 12:03:49.468500 kubelet[1893]: I0129 12:03:49.468499 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtjqf\" (UniqueName: \"kubernetes.io/projected/a7fc09c7-d4f3-4f0d-b3bf-27c720ed6a5d-kube-api-access-gtjqf\") pod \"test-pod-1\" (UID: \"a7fc09c7-d4f3-4f0d-b3bf-27c720ed6a5d\") " pod="default/test-pod-1" Jan 29 12:03:49.496287 kubelet[1893]: E0129 12:03:49.496240 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:49.598431 kernel: FS-Cache: Loaded Jan 29 12:03:49.666585 kernel: RPC: Registered named UNIX socket transport module. Jan 29 12:03:49.666627 kernel: RPC: Registered udp transport module. Jan 29 12:03:49.666649 kernel: RPC: Registered tcp transport module. Jan 29 12:03:49.667859 kernel: RPC: Registered tcp-with-tls transport module. Jan 29 12:03:49.667883 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 29 12:03:49.950631 kernel: NFS: Registering the id_resolver key type Jan 29 12:03:49.950713 kernel: Key type id_resolver registered Jan 29 12:03:49.950736 kernel: Key type id_legacy registered Jan 29 12:03:49.976839 nfsidmap[3286]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 29 12:03:49.981610 nfsidmap[3289]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 29 12:03:50.022664 containerd[1542]: time="2025-01-29T12:03:50.022623706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a7fc09c7-d4f3-4f0d-b3bf-27c720ed6a5d,Namespace:default,Attempt:0,}" Jan 29 12:03:50.047625 systemd-networkd[1240]: lxcd25b2eab2ee5: Link UP Jan 29 12:03:50.057439 kernel: eth0: renamed from tmp13000 Jan 29 12:03:50.063850 systemd-networkd[1240]: lxcd25b2eab2ee5: Gained carrier Jan 29 12:03:50.252715 containerd[1542]: time="2025-01-29T12:03:50.252456612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:50.253432 containerd[1542]: time="2025-01-29T12:03:50.253346498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:50.253432 containerd[1542]: time="2025-01-29T12:03:50.253392154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:50.253594 containerd[1542]: time="2025-01-29T12:03:50.253516899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:50.278130 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:03:50.302926 containerd[1542]: time="2025-01-29T12:03:50.302889638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a7fc09c7-d4f3-4f0d-b3bf-27c720ed6a5d,Namespace:default,Attempt:0,} returns sandbox id \"130009cd60cfe8beee95942ed0848d154c259c2a781ecc7b402125b5b761d851\"" Jan 29 12:03:50.304343 containerd[1542]: time="2025-01-29T12:03:50.304312026Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 12:03:50.496681 kubelet[1893]: E0129 12:03:50.496642 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:50.668381 containerd[1542]: time="2025-01-29T12:03:50.668328520Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:50.669326 containerd[1542]: time="2025-01-29T12:03:50.669266285Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 29 12:03:50.672935 containerd[1542]: time="2025-01-29T12:03:50.672893394Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 368.544949ms" Jan 29 12:03:50.672935 containerd[1542]: time="2025-01-29T12:03:50.672933760Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 12:03:50.674816 containerd[1542]: time="2025-01-29T12:03:50.674785556Z" level=info msg="CreateContainer within sandbox \"130009cd60cfe8beee95942ed0848d154c259c2a781ecc7b402125b5b761d851\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 29 12:03:50.686930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3563153699.mount: Deactivated successfully. 
Jan 29 12:03:50.689513 containerd[1542]: time="2025-01-29T12:03:50.689479960Z" level=info msg="CreateContainer within sandbox \"130009cd60cfe8beee95942ed0848d154c259c2a781ecc7b402125b5b761d851\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"c39939b3d80ac68d77aaeb3beb0740932ed5f61c6dc11c122534624b9fd40c3b\"" Jan 29 12:03:50.690112 containerd[1542]: time="2025-01-29T12:03:50.689869563Z" level=info msg="StartContainer for \"c39939b3d80ac68d77aaeb3beb0740932ed5f61c6dc11c122534624b9fd40c3b\"" Jan 29 12:03:50.740914 containerd[1542]: time="2025-01-29T12:03:50.740876730Z" level=info msg="StartContainer for \"c39939b3d80ac68d77aaeb3beb0740932ed5f61c6dc11c122534624b9fd40c3b\" returns successfully" Jan 29 12:03:51.301573 systemd-networkd[1240]: lxcd25b2eab2ee5: Gained IPv6LL Jan 29 12:03:51.497011 kubelet[1893]: E0129 12:03:51.496950 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:52.498080 kubelet[1893]: E0129 12:03:52.498011 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:53.498720 kubelet[1893]: E0129 12:03:53.498648 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:54.499645 kubelet[1893]: E0129 12:03:54.499591 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:55.500664 kubelet[1893]: E0129 12:03:55.500602 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:56.501706 kubelet[1893]: E0129 12:03:56.501657 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:56.810311 kubelet[1893]: I0129 12:03:56.810178 1893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=22.440628499 podStartE2EDuration="22.810161989s" podCreationTimestamp="2025-01-29 12:03:34 +0000 UTC" firstStartedPulling="2025-01-29 12:03:50.30401653 +0000 UTC m=+50.033663938" lastFinishedPulling="2025-01-29 12:03:50.67355002 +0000 UTC m=+50.403197428" observedRunningTime="2025-01-29 12:03:51.254849609 +0000 UTC m=+50.984497007" watchObservedRunningTime="2025-01-29 12:03:56.810161989 +0000 UTC m=+56.539809397" Jan 29 12:03:56.838270 containerd[1542]: time="2025-01-29T12:03:56.838205967Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:03:56.845135 containerd[1542]: time="2025-01-29T12:03:56.845104318Z" level=info msg="StopContainer for \"a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e\" with timeout 2 (s)" Jan 29 12:03:56.845303 containerd[1542]: time="2025-01-29T12:03:56.845285238Z" level=info msg="Stop container \"a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e\" with signal terminated" Jan 29 12:03:56.853306 systemd-networkd[1240]: lxc_health: Link DOWN Jan 29 12:03:56.853317 systemd-networkd[1240]: lxc_health: Lost carrier Jan 29 12:03:56.899070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e-rootfs.mount: Deactivated successfully. 
Jan 29 12:03:56.910666 containerd[1542]: time="2025-01-29T12:03:56.910587126Z" level=info msg="shim disconnected" id=a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e namespace=k8s.io Jan 29 12:03:56.910666 containerd[1542]: time="2025-01-29T12:03:56.910657680Z" level=warning msg="cleaning up after shim disconnected" id=a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e namespace=k8s.io Jan 29 12:03:56.910666 containerd[1542]: time="2025-01-29T12:03:56.910671395Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:03:56.928751 containerd[1542]: time="2025-01-29T12:03:56.928709049Z" level=info msg="StopContainer for \"a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e\" returns successfully" Jan 29 12:03:56.929438 containerd[1542]: time="2025-01-29T12:03:56.929381964Z" level=info msg="StopPodSandbox for \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\"" Jan 29 12:03:56.929438 containerd[1542]: time="2025-01-29T12:03:56.929439082Z" level=info msg="Container to stop \"441573d29db573ad48bcc53335527b3faf6d8a09c8e66b5d4dda7ca5788946a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:03:56.929619 containerd[1542]: time="2025-01-29T12:03:56.929457035Z" level=info msg="Container to stop \"a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:03:56.929619 containerd[1542]: time="2025-01-29T12:03:56.929469018Z" level=info msg="Container to stop \"7a31d1d047d138f8baab7fc48da57aefbf82b69a162394ddfcaa637510aa0869\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:03:56.929619 containerd[1542]: time="2025-01-29T12:03:56.929480139Z" level=info msg="Container to stop \"d611c542b6c93b1b9751e12c4216171ae0c2e1a0d7e8a017d6eb7622eeb0e2d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:03:56.929619 containerd[1542]: time="2025-01-29T12:03:56.929491099Z" level=info msg="Container to stop \"6b1aad10748cc5c46298353ada01e027cfbe010971777c9e7acaf5db0400b1be\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:03:56.931706 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9-shm.mount: Deactivated successfully. Jan 29 12:03:56.951183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9-rootfs.mount: Deactivated successfully. 
Jan 29 12:03:56.957497 containerd[1542]: time="2025-01-29T12:03:56.957428196Z" level=info msg="shim disconnected" id=b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9 namespace=k8s.io Jan 29 12:03:56.957497 containerd[1542]: time="2025-01-29T12:03:56.957487587Z" level=warning msg="cleaning up after shim disconnected" id=b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9 namespace=k8s.io Jan 29 12:03:56.957497 containerd[1542]: time="2025-01-29T12:03:56.957499259Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:03:56.971155 containerd[1542]: time="2025-01-29T12:03:56.971097968Z" level=info msg="TearDown network for sandbox \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\" successfully" Jan 29 12:03:56.971155 containerd[1542]: time="2025-01-29T12:03:56.971136670Z" level=info msg="StopPodSandbox for \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\" returns successfully" Jan 29 12:03:57.011938 kubelet[1893]: I0129 12:03:57.011876 1893 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-hostproc\") pod \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " Jan 29 12:03:57.011938 kubelet[1893]: I0129 12:03:57.011938 1893 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-hostproc" (OuterVolumeSpecName: "hostproc") pod "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" (UID: "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.012188 kubelet[1893]: I0129 12:03:57.011995 1893 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-bpf-maps\") pod \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " Jan 29 12:03:57.012188 kubelet[1893]: I0129 12:03:57.012046 1893 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-etc-cni-netd\") pod \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " Jan 29 12:03:57.012188 kubelet[1893]: I0129 12:03:57.012064 1893 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-cni-path\") pod \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " Jan 29 12:03:57.012188 kubelet[1893]: I0129 12:03:57.012086 1893 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9rmm\" (UniqueName: \"kubernetes.io/projected/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-kube-api-access-f9rmm\") pod \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " Jan 29 12:03:57.012188 kubelet[1893]: I0129 12:03:57.012101 1893 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-host-proc-sys-kernel\") pod \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " Jan 29 12:03:57.012188 kubelet[1893]: I0129 12:03:57.012117 1893 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-host-proc-sys-net\") pod \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " Jan 29 12:03:57.012396 kubelet[1893]: I0129 12:03:57.012134 1893 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-hubble-tls\") pod \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " Jan 29 12:03:57.012396 kubelet[1893]: I0129 12:03:57.012100 1893 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" (UID: "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.012396 kubelet[1893]: I0129 12:03:57.012114 1893 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-cni-path" (OuterVolumeSpecName: "cni-path") pod "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" (UID: "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.012396 kubelet[1893]: I0129 12:03:57.012131 1893 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" (UID: "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.012396 kubelet[1893]: I0129 12:03:57.012182 1893 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" (UID: "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.012595 kubelet[1893]: I0129 12:03:57.012112 1893 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" (UID: "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.012595 kubelet[1893]: I0129 12:03:57.012200 1893 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" (UID: "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.012595 kubelet[1893]: I0129 12:03:57.012149 1893 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-xtables-lock\") pod \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " Jan 29 12:03:57.012595 kubelet[1893]: I0129 12:03:57.012247 1893 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-cilium-cgroup\") pod \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " Jan 29 12:03:57.012595 kubelet[1893]: I0129 12:03:57.012264 1893 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-cilium-run\") pod \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " Jan 29 12:03:57.012595 kubelet[1893]: I0129 12:03:57.012279 1893 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-lib-modules\") pod \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " Jan 29 12:03:57.012863 kubelet[1893]: I0129 12:03:57.012297 1893 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-cilium-config-path\") pod \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " Jan 29 12:03:57.012863 kubelet[1893]: I0129 12:03:57.012313 1893 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-clustermesh-secrets\") pod \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\" (UID: \"6c5bf952-7b29-4cb8-8ebb-7df04efe9abe\") " Jan 29 12:03:57.012863 kubelet[1893]: I0129 12:03:57.012345 1893 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-hostproc\") on node \"10.0.0.142\" DevicePath \"\"" Jan 29 12:03:57.012863 kubelet[1893]: I0129 12:03:57.012355 1893 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-bpf-maps\") on node \"10.0.0.142\" DevicePath \"\"" Jan 29 12:03:57.012863 kubelet[1893]: I0129 12:03:57.012364 1893 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-etc-cni-netd\") on node \"10.0.0.142\" DevicePath \"\"" Jan 29 12:03:57.012863 kubelet[1893]: I0129 12:03:57.012373 1893 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-cni-path\") on node \"10.0.0.142\" DevicePath \"\"" Jan 29 12:03:57.012863 kubelet[1893]: I0129 12:03:57.012383 1893 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-host-proc-sys-kernel\") on node \"10.0.0.142\" DevicePath \"\"" Jan 29 12:03:57.012863 kubelet[1893]: I0129 12:03:57.012391 1893 
reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-host-proc-sys-net\") on node \"10.0.0.142\" DevicePath \"\"" Jan 29 12:03:57.013105 kubelet[1893]: I0129 12:03:57.012425 1893 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-xtables-lock\") on node \"10.0.0.142\" DevicePath \"\"" Jan 29 12:03:57.013105 kubelet[1893]: I0129 12:03:57.012480 1893 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" (UID: "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.013105 kubelet[1893]: I0129 12:03:57.012504 1893 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" (UID: "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.013105 kubelet[1893]: I0129 12:03:57.012521 1893 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" (UID: "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:03:57.015616 kubelet[1893]: I0129 12:03:57.015593 1893 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" (UID: "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:03:57.015858 kubelet[1893]: I0129 12:03:57.015820 1893 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" (UID: "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:03:57.016101 kubelet[1893]: I0129 12:03:57.016064 1893 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-kube-api-access-f9rmm" (OuterVolumeSpecName: "kube-api-access-f9rmm") pod "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" (UID: "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe"). InnerVolumeSpecName "kube-api-access-f9rmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:03:57.016288 kubelet[1893]: I0129 12:03:57.016251 1893 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" (UID: "6c5bf952-7b29-4cb8-8ebb-7df04efe9abe"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:03:57.016712 systemd[1]: var-lib-kubelet-pods-6c5bf952\x2d7b29\x2d4cb8\x2d8ebb\x2d7df04efe9abe-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 12:03:57.016905 systemd[1]: var-lib-kubelet-pods-6c5bf952\x2d7b29\x2d4cb8\x2d8ebb\x2d7df04efe9abe-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 12:03:57.112705 kubelet[1893]: I0129 12:03:57.112567 1893 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-f9rmm\" (UniqueName: \"kubernetes.io/projected/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-kube-api-access-f9rmm\") on node \"10.0.0.142\" DevicePath \"\"" Jan 29 12:03:57.112705 kubelet[1893]: I0129 12:03:57.112587 1893 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-hubble-tls\") on node \"10.0.0.142\" DevicePath \"\"" Jan 29 12:03:57.112705 kubelet[1893]: I0129 12:03:57.112610 1893 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-cilium-cgroup\") on node \"10.0.0.142\" DevicePath \"\"" Jan 29 12:03:57.112705 kubelet[1893]: I0129 12:03:57.112618 1893 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-cilium-run\") on node \"10.0.0.142\" DevicePath \"\"" Jan 29 12:03:57.112705 kubelet[1893]: I0129 12:03:57.112625 1893 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-lib-modules\") on node \"10.0.0.142\" DevicePath \"\"" Jan 29 12:03:57.112705 kubelet[1893]: I0129 12:03:57.112633 1893 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-cilium-config-path\") on node \"10.0.0.142\" DevicePath \"\"" Jan 29 12:03:57.112705 kubelet[1893]: I0129 12:03:57.112641 1893 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe-clustermesh-secrets\") on node \"10.0.0.142\" DevicePath \"\"" Jan 29 12:03:57.256858 kubelet[1893]: I0129 12:03:57.256821 1893 scope.go:117] "RemoveContainer" containerID="a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e" Jan 29 12:03:57.258181 containerd[1542]: time="2025-01-29T12:03:57.258144291Z" level=info msg="RemoveContainer for \"a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e\"" Jan 29 12:03:57.262086 containerd[1542]: time="2025-01-29T12:03:57.262058419Z" level=info msg="RemoveContainer for \"a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e\" returns successfully" Jan 29 12:03:57.262396 kubelet[1893]: I0129 12:03:57.262358 1893 scope.go:117] "RemoveContainer" containerID="d611c542b6c93b1b9751e12c4216171ae0c2e1a0d7e8a017d6eb7622eeb0e2d5" Jan 29 12:03:57.263618 containerd[1542]: time="2025-01-29T12:03:57.263593284Z" level=info msg="RemoveContainer for \"d611c542b6c93b1b9751e12c4216171ae0c2e1a0d7e8a017d6eb7622eeb0e2d5\"" Jan 29 12:03:57.266999 containerd[1542]: time="2025-01-29T12:03:57.266958560Z" level=info msg="RemoveContainer for \"d611c542b6c93b1b9751e12c4216171ae0c2e1a0d7e8a017d6eb7622eeb0e2d5\" returns successfully" Jan 29 12:03:57.267160 kubelet[1893]: I0129 12:03:57.267129 1893 
scope.go:117] "RemoveContainer" containerID="6b1aad10748cc5c46298353ada01e027cfbe010971777c9e7acaf5db0400b1be" Jan 29 12:03:57.268317 containerd[1542]: time="2025-01-29T12:03:57.268272821Z" level=info msg="RemoveContainer for \"6b1aad10748cc5c46298353ada01e027cfbe010971777c9e7acaf5db0400b1be\"" Jan 29 12:03:57.271948 containerd[1542]: time="2025-01-29T12:03:57.271904949Z" level=info msg="RemoveContainer for \"6b1aad10748cc5c46298353ada01e027cfbe010971777c9e7acaf5db0400b1be\" returns successfully" Jan 29 12:03:57.272156 kubelet[1893]: I0129 12:03:57.272124 1893 scope.go:117] "RemoveContainer" containerID="441573d29db573ad48bcc53335527b3faf6d8a09c8e66b5d4dda7ca5788946a9" Jan 29 12:03:57.273158 containerd[1542]: time="2025-01-29T12:03:57.273125022Z" level=info msg="RemoveContainer for \"441573d29db573ad48bcc53335527b3faf6d8a09c8e66b5d4dda7ca5788946a9\"" Jan 29 12:03:57.276496 containerd[1542]: time="2025-01-29T12:03:57.276459220Z" level=info msg="RemoveContainer for \"441573d29db573ad48bcc53335527b3faf6d8a09c8e66b5d4dda7ca5788946a9\" returns successfully" Jan 29 12:03:57.276628 kubelet[1893]: I0129 12:03:57.276611 1893 scope.go:117] "RemoveContainer" containerID="7a31d1d047d138f8baab7fc48da57aefbf82b69a162394ddfcaa637510aa0869" Jan 29 12:03:57.277668 containerd[1542]: time="2025-01-29T12:03:57.277643445Z" level=info msg="RemoveContainer for \"7a31d1d047d138f8baab7fc48da57aefbf82b69a162394ddfcaa637510aa0869\"" Jan 29 12:03:57.280686 containerd[1542]: time="2025-01-29T12:03:57.280653244Z" level=info msg="RemoveContainer for \"7a31d1d047d138f8baab7fc48da57aefbf82b69a162394ddfcaa637510aa0869\" returns successfully" Jan 29 12:03:57.280831 kubelet[1893]: I0129 12:03:57.280802 1893 scope.go:117] "RemoveContainer" containerID="a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e" Jan 29 12:03:57.281125 containerd[1542]: time="2025-01-29T12:03:57.281054568Z" level=error msg="ContainerStatus for \"a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e\": not found" Jan 29 12:03:57.281292 kubelet[1893]: E0129 12:03:57.281261 1893 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e\": not found" containerID="a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e" Jan 29 12:03:57.281358 kubelet[1893]: I0129 12:03:57.281295 1893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e"} err="failed to get container status \"a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a935038ed8a12625120b2a31103b4dee079ae1bda984ce5058007a73fcd9c77e\": not found" Jan 29 12:03:57.281395 kubelet[1893]: I0129 12:03:57.281362 1893 scope.go:117] "RemoveContainer" containerID="d611c542b6c93b1b9751e12c4216171ae0c2e1a0d7e8a017d6eb7622eeb0e2d5" Jan 29 12:03:57.281611 containerd[1542]: time="2025-01-29T12:03:57.281570619Z" level=error msg="ContainerStatus for \"d611c542b6c93b1b9751e12c4216171ae0c2e1a0d7e8a017d6eb7622eeb0e2d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d611c542b6c93b1b9751e12c4216171ae0c2e1a0d7e8a017d6eb7622eeb0e2d5\": not found" Jan 29 12:03:57.281776 kubelet[1893]: E0129 12:03:57.281728 1893 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d611c542b6c93b1b9751e12c4216171ae0c2e1a0d7e8a017d6eb7622eeb0e2d5\": not found" containerID="d611c542b6c93b1b9751e12c4216171ae0c2e1a0d7e8a017d6eb7622eeb0e2d5" Jan 29 12:03:57.281816 kubelet[1893]: I0129 12:03:57.281776 1893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d611c542b6c93b1b9751e12c4216171ae0c2e1a0d7e8a017d6eb7622eeb0e2d5"} err="failed to get container status \"d611c542b6c93b1b9751e12c4216171ae0c2e1a0d7e8a017d6eb7622eeb0e2d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"d611c542b6c93b1b9751e12c4216171ae0c2e1a0d7e8a017d6eb7622eeb0e2d5\": not found" Jan 29 12:03:57.281816 kubelet[1893]: I0129 12:03:57.281808 1893 scope.go:117] "RemoveContainer" containerID="6b1aad10748cc5c46298353ada01e027cfbe010971777c9e7acaf5db0400b1be" Jan 29 12:03:57.282066 containerd[1542]: time="2025-01-29T12:03:57.282024111Z" level=error msg="ContainerStatus for \"6b1aad10748cc5c46298353ada01e027cfbe010971777c9e7acaf5db0400b1be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b1aad10748cc5c46298353ada01e027cfbe010971777c9e7acaf5db0400b1be\": not found" Jan 29 12:03:57.282240 kubelet[1893]: E0129 12:03:57.282210 1893 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b1aad10748cc5c46298353ada01e027cfbe010971777c9e7acaf5db0400b1be\": not found" containerID="6b1aad10748cc5c46298353ada01e027cfbe010971777c9e7acaf5db0400b1be" Jan 29 12:03:57.282276 kubelet[1893]: I0129 12:03:57.282239 1893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6b1aad10748cc5c46298353ada01e027cfbe010971777c9e7acaf5db0400b1be"} err="failed to get container status \"6b1aad10748cc5c46298353ada01e027cfbe010971777c9e7acaf5db0400b1be\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b1aad10748cc5c46298353ada01e027cfbe010971777c9e7acaf5db0400b1be\": not found" Jan 29 12:03:57.282276 kubelet[1893]: I0129 12:03:57.282262 1893 scope.go:117] "RemoveContainer" containerID="441573d29db573ad48bcc53335527b3faf6d8a09c8e66b5d4dda7ca5788946a9" Jan 29 12:03:57.282501 containerd[1542]: time="2025-01-29T12:03:57.282462185Z" level=error msg="ContainerStatus for \"441573d29db573ad48bcc53335527b3faf6d8a09c8e66b5d4dda7ca5788946a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"441573d29db573ad48bcc53335527b3faf6d8a09c8e66b5d4dda7ca5788946a9\": not found" Jan 29 12:03:57.282637 kubelet[1893]: E0129 12:03:57.282614 1893 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"441573d29db573ad48bcc53335527b3faf6d8a09c8e66b5d4dda7ca5788946a9\": not found" containerID="441573d29db573ad48bcc53335527b3faf6d8a09c8e66b5d4dda7ca5788946a9" Jan 29 12:03:57.282697 kubelet[1893]: I0129 12:03:57.282638 1893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"441573d29db573ad48bcc53335527b3faf6d8a09c8e66b5d4dda7ca5788946a9"} err="failed to get container status 
\"441573d29db573ad48bcc53335527b3faf6d8a09c8e66b5d4dda7ca5788946a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"441573d29db573ad48bcc53335527b3faf6d8a09c8e66b5d4dda7ca5788946a9\": not found" Jan 29 12:03:57.282697 kubelet[1893]: I0129 12:03:57.282653 1893 scope.go:117] "RemoveContainer" containerID="7a31d1d047d138f8baab7fc48da57aefbf82b69a162394ddfcaa637510aa0869" Jan 29 12:03:57.282872 containerd[1542]: time="2025-01-29T12:03:57.282831599Z" level=error msg="ContainerStatus for \"7a31d1d047d138f8baab7fc48da57aefbf82b69a162394ddfcaa637510aa0869\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a31d1d047d138f8baab7fc48da57aefbf82b69a162394ddfcaa637510aa0869\": not found" Jan 29 12:03:57.282982 kubelet[1893]: E0129 12:03:57.282961 1893 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a31d1d047d138f8baab7fc48da57aefbf82b69a162394ddfcaa637510aa0869\": not found" containerID="7a31d1d047d138f8baab7fc48da57aefbf82b69a162394ddfcaa637510aa0869" Jan 29 12:03:57.283073 kubelet[1893]: I0129 12:03:57.282983 1893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a31d1d047d138f8baab7fc48da57aefbf82b69a162394ddfcaa637510aa0869"} err="failed to get container status \"7a31d1d047d138f8baab7fc48da57aefbf82b69a162394ddfcaa637510aa0869\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a31d1d047d138f8baab7fc48da57aefbf82b69a162394ddfcaa637510aa0869\": not found" Jan 29 12:03:57.502479 kubelet[1893]: E0129 12:03:57.502420 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:57.822860 systemd[1]: var-lib-kubelet-pods-6c5bf952\x2d7b29\x2d4cb8\x2d8ebb\x2d7df04efe9abe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df9rmm.mount: Deactivated successfully. 
Jan 29 12:03:58.503579 kubelet[1893]: E0129 12:03:58.503508 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:59.154679 kubelet[1893]: I0129 12:03:59.154638 1893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" path="/var/lib/kubelet/pods/6c5bf952-7b29-4cb8-8ebb-7df04efe9abe/volumes" Jan 29 12:03:59.304251 kubelet[1893]: I0129 12:03:59.304206 1893 topology_manager.go:215] "Topology Admit Handler" podUID="235249ab-dcca-4ce3-bc09-ff69c1414ae3" podNamespace="kube-system" podName="cilium-operator-599987898-xhx6l" Jan 29 12:03:59.304251 kubelet[1893]: E0129 12:03:59.304253 1893 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" containerName="apply-sysctl-overwrites" Jan 29 12:03:59.304251 kubelet[1893]: E0129 12:03:59.304261 1893 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" containerName="clean-cilium-state" Jan 29 12:03:59.304473 kubelet[1893]: E0129 12:03:59.304269 1893 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" containerName="mount-cgroup" Jan 29 12:03:59.304473 kubelet[1893]: E0129 12:03:59.304276 1893 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" containerName="mount-bpf-fs" Jan 29 12:03:59.304473 kubelet[1893]: E0129 12:03:59.304282 1893 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" containerName="cilium-agent" Jan 29 12:03:59.304473 kubelet[1893]: I0129 12:03:59.304296 1893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c5bf952-7b29-4cb8-8ebb-7df04efe9abe" containerName="cilium-agent" Jan 29 12:03:59.305890 kubelet[1893]: I0129 12:03:59.305858 1893 topology_manager.go:215] "Topology Admit Handler" podUID="f6d1ea8d-aa40-4e59-9444-c9d76e1abedf" podNamespace="kube-system" podName="cilium-8h9fv" Jan 29 12:03:59.323921 kubelet[1893]: I0129 12:03:59.323874 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f6d1ea8d-aa40-4e59-9444-c9d76e1abedf-cilium-run\") pod \"cilium-8h9fv\" (UID: \"f6d1ea8d-aa40-4e59-9444-c9d76e1abedf\") " pod="kube-system/cilium-8h9fv" Jan 29 12:03:59.323921 kubelet[1893]: I0129 12:03:59.323922 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6d1ea8d-aa40-4e59-9444-c9d76e1abedf-xtables-lock\") pod \"cilium-8h9fv\" (UID: \"f6d1ea8d-aa40-4e59-9444-c9d76e1abedf\") " pod="kube-system/cilium-8h9fv" Jan 29 12:03:59.324104 kubelet[1893]: I0129 12:03:59.323947 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f6d1ea8d-aa40-4e59-9444-c9d76e1abedf-cilium-ipsec-secrets\") pod \"cilium-8h9fv\" (UID: \"f6d1ea8d-aa40-4e59-9444-c9d76e1abedf\") " pod="kube-system/cilium-8h9fv" Jan 29 12:03:59.324104 kubelet[1893]: I0129 12:03:59.323983 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f6d1ea8d-aa40-4e59-9444-c9d76e1abedf-host-proc-sys-net\") pod \"cilium-8h9fv\" (UID: \"f6d1ea8d-aa40-4e59-9444-c9d76e1abedf\") " 
pod="kube-system/cilium-8h9fv" Jan 29 12:03:59.324104 kubelet[1893]: I0129 12:03:59.324010 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/235249ab-dcca-4ce3-bc09-ff69c1414ae3-cilium-config-path\") pod \"cilium-operator-599987898-xhx6l\" (UID: \"235249ab-dcca-4ce3-bc09-ff69c1414ae3\") " pod="kube-system/cilium-operator-599987898-xhx6l" Jan 29 12:03:59.324104 kubelet[1893]: I0129 12:03:59.324038 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdl27\" (UniqueName: \"kubernetes.io/projected/235249ab-dcca-4ce3-bc09-ff69c1414ae3-kube-api-access-hdl27\") pod \"cilium-operator-599987898-xhx6l\" (UID: \"235249ab-dcca-4ce3-bc09-ff69c1414ae3\") " pod="kube-system/cilium-operator-599987898-xhx6l" Jan 29 12:03:59.324104 kubelet[1893]: I0129 12:03:59.324065 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f6d1ea8d-aa40-4e59-9444-c9d76e1abedf-cni-path\") pod \"cilium-8h9fv\" (UID: \"f6d1ea8d-aa40-4e59-9444-c9d76e1abedf\") " pod="kube-system/cilium-8h9fv" Jan 29 12:03:59.324252 kubelet[1893]: I0129 12:03:59.324086 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6d1ea8d-aa40-4e59-9444-c9d76e1abedf-clustermesh-secrets\") pod \"cilium-8h9fv\" (UID: \"f6d1ea8d-aa40-4e59-9444-c9d76e1abedf\") " pod="kube-system/cilium-8h9fv" Jan 29 12:03:59.324252 kubelet[1893]: I0129 12:03:59.324106 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6d1ea8d-aa40-4e59-9444-c9d76e1abedf-cilium-config-path\") pod \"cilium-8h9fv\" (UID: \"f6d1ea8d-aa40-4e59-9444-c9d76e1abedf\") " pod="kube-system/cilium-8h9fv" Jan 29 12:03:59.324252 kubelet[1893]: I0129 12:03:59.324125 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6d1ea8d-aa40-4e59-9444-c9d76e1abedf-hubble-tls\") pod \"cilium-8h9fv\" (UID: \"f6d1ea8d-aa40-4e59-9444-c9d76e1abedf\") " pod="kube-system/cilium-8h9fv" Jan 29 12:03:59.324252 kubelet[1893]: I0129 12:03:59.324143 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6d1ea8d-aa40-4e59-9444-c9d76e1abedf-bpf-maps\") pod \"cilium-8h9fv\" (UID: \"f6d1ea8d-aa40-4e59-9444-c9d76e1abedf\") " pod="kube-system/cilium-8h9fv" Jan 29 12:03:59.324252 kubelet[1893]: I0129 12:03:59.324158 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6d1ea8d-aa40-4e59-9444-c9d76e1abedf-hostproc\") pod \"cilium-8h9fv\" (UID: \"f6d1ea8d-aa40-4e59-9444-c9d76e1abedf\") " pod="kube-system/cilium-8h9fv" Jan 29 12:03:59.324252 kubelet[1893]: I0129 12:03:59.324173 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6d1ea8d-aa40-4e59-9444-c9d76e1abedf-etc-cni-netd\") pod \"cilium-8h9fv\" (UID: \"f6d1ea8d-aa40-4e59-9444-c9d76e1abedf\") " pod="kube-system/cilium-8h9fv" Jan 29 12:03:59.324447 kubelet[1893]: I0129 12:03:59.324228 1893 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6d1ea8d-aa40-4e59-9444-c9d76e1abedf-lib-modules\") pod \"cilium-8h9fv\" (UID: \"f6d1ea8d-aa40-4e59-9444-c9d76e1abedf\") " pod="kube-system/cilium-8h9fv" Jan 29 12:03:59.324447 kubelet[1893]: I0129 12:03:59.324243 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f6d1ea8d-aa40-4e59-9444-c9d76e1abedf-cilium-cgroup\") pod \"cilium-8h9fv\" (UID: \"f6d1ea8d-aa40-4e59-9444-c9d76e1abedf\") " pod="kube-system/cilium-8h9fv" Jan 29 12:03:59.324447 kubelet[1893]: I0129 12:03:59.324256 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6d1ea8d-aa40-4e59-9444-c9d76e1abedf-host-proc-sys-kernel\") pod \"cilium-8h9fv\" (UID: \"f6d1ea8d-aa40-4e59-9444-c9d76e1abedf\") " pod="kube-system/cilium-8h9fv" Jan 29 12:03:59.324447 kubelet[1893]: I0129 12:03:59.324272 1893 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txfhq\" (UniqueName: \"kubernetes.io/projected/f6d1ea8d-aa40-4e59-9444-c9d76e1abedf-kube-api-access-txfhq\") pod \"cilium-8h9fv\" (UID: \"f6d1ea8d-aa40-4e59-9444-c9d76e1abedf\") " pod="kube-system/cilium-8h9fv" Jan 29 12:03:59.503776 kubelet[1893]: E0129 12:03:59.503656 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:03:59.607289 kubelet[1893]: E0129 12:03:59.607253 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:59.607799 containerd[1542]: time="2025-01-29T12:03:59.607761870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xhx6l,Uid:235249ab-dcca-4ce3-bc09-ff69c1414ae3,Namespace:kube-system,Attempt:0,}" Jan 29 12:03:59.610055 kubelet[1893]: E0129 12:03:59.610029 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:59.610393 containerd[1542]: time="2025-01-29T12:03:59.610367536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8h9fv,Uid:f6d1ea8d-aa40-4e59-9444-c9d76e1abedf,Namespace:kube-system,Attempt:0,}" Jan 29 12:03:59.635793 containerd[1542]: time="2025-01-29T12:03:59.635705694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:59.635793 containerd[1542]: time="2025-01-29T12:03:59.635766849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:59.635793 containerd[1542]: time="2025-01-29T12:03:59.635783811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:59.635975 containerd[1542]: time="2025-01-29T12:03:59.635872988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:59.637804 containerd[1542]: time="2025-01-29T12:03:59.637714529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:59.637804 containerd[1542]: time="2025-01-29T12:03:59.637768660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:59.637804 containerd[1542]: time="2025-01-29T12:03:59.637786946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:59.640736 containerd[1542]: time="2025-01-29T12:03:59.640678199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:59.674665 containerd[1542]: time="2025-01-29T12:03:59.674624143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8h9fv,Uid:f6d1ea8d-aa40-4e59-9444-c9d76e1abedf,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ec75a24d812037527d1d017947d777deb1349c4273a13aaf3e86776c78f2ec4\"" Jan 29 12:03:59.675352 kubelet[1893]: E0129 12:03:59.675316 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:59.677004 containerd[1542]: time="2025-01-29T12:03:59.676946007Z" level=info msg="CreateContainer within sandbox \"8ec75a24d812037527d1d017947d777deb1349c4273a13aaf3e86776c78f2ec4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 12:03:59.690999 containerd[1542]: time="2025-01-29T12:03:59.690952177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xhx6l,Uid:235249ab-dcca-4ce3-bc09-ff69c1414ae3,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd5fed667b0478b6dc3fcd825ff0896dc541adb84874c2650d03eb1918009f81\"" Jan 29 12:03:59.691519 kubelet[1893]: E0129 12:03:59.691500 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:03:59.692391 containerd[1542]: time="2025-01-29T12:03:59.692358299Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 12:03:59.693649 containerd[1542]: time="2025-01-29T12:03:59.693604843Z" level=info msg="CreateContainer within sandbox \"8ec75a24d812037527d1d017947d777deb1349c4273a13aaf3e86776c78f2ec4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"617feab8ec00f5a4ba27c7b0e47d50eb55cdfbe2862323c4e56812b0e3e0c658\"" Jan 29 12:03:59.694005 containerd[1542]: time="2025-01-29T12:03:59.693939732Z" level=info msg="StartContainer for \"617feab8ec00f5a4ba27c7b0e47d50eb55cdfbe2862323c4e56812b0e3e0c658\"" Jan 29 12:03:59.753329 containerd[1542]: time="2025-01-29T12:03:59.753253297Z" level=info msg="StartContainer for \"617feab8ec00f5a4ba27c7b0e47d50eb55cdfbe2862323c4e56812b0e3e0c658\" returns successfully" Jan 29 12:03:59.795589 containerd[1542]: time="2025-01-29T12:03:59.795427363Z" level=info msg="shim disconnected" id=617feab8ec00f5a4ba27c7b0e47d50eb55cdfbe2862323c4e56812b0e3e0c658 namespace=k8s.io Jan 29 12:03:59.795589 containerd[1542]: time="2025-01-29T12:03:59.795496133Z" level=warning msg="cleaning up after shim 
disconnected" id=617feab8ec00f5a4ba27c7b0e47d50eb55cdfbe2862323c4e56812b0e3e0c658 namespace=k8s.io Jan 29 12:03:59.795589 containerd[1542]: time="2025-01-29T12:03:59.795507144Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:04:00.267241 kubelet[1893]: E0129 12:04:00.267183 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:04:00.269456 containerd[1542]: time="2025-01-29T12:04:00.269381799Z" level=info msg="CreateContainer within sandbox \"8ec75a24d812037527d1d017947d777deb1349c4273a13aaf3e86776c78f2ec4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 12:04:00.282353 containerd[1542]: time="2025-01-29T12:04:00.282298469Z" level=info msg="CreateContainer within sandbox \"8ec75a24d812037527d1d017947d777deb1349c4273a13aaf3e86776c78f2ec4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9a6aa17320ca76e862fd3ad7d9d08ed7e7981839f460616f43a4dd4c7c7490db\"" Jan 29 12:04:00.282986 containerd[1542]: time="2025-01-29T12:04:00.282923432Z" level=info msg="StartContainer for \"9a6aa17320ca76e862fd3ad7d9d08ed7e7981839f460616f43a4dd4c7c7490db\"" Jan 29 12:04:00.342522 containerd[1542]: time="2025-01-29T12:04:00.342447959Z" level=info msg="StartContainer for \"9a6aa17320ca76e862fd3ad7d9d08ed7e7981839f460616f43a4dd4c7c7490db\" returns successfully" Jan 29 12:04:00.465104 kubelet[1893]: E0129 12:04:00.465043 1893 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:04:00.494341 containerd[1542]: time="2025-01-29T12:04:00.494296892Z" level=info msg="StopPodSandbox for \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\"" Jan 29 12:04:00.494493 containerd[1542]: time="2025-01-29T12:04:00.494393294Z" level=info msg="TearDown network for sandbox \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\" successfully" Jan 29 12:04:00.494493 containerd[1542]: time="2025-01-29T12:04:00.494423270Z" level=info msg="StopPodSandbox for \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\" returns successfully" Jan 29 12:04:00.494736 containerd[1542]: time="2025-01-29T12:04:00.494699108Z" level=info msg="RemovePodSandbox for \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\"" Jan 29 12:04:00.494736 containerd[1542]: time="2025-01-29T12:04:00.494735136Z" level=info msg="Forcibly stopping sandbox \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\"" Jan 29 12:04:00.494900 containerd[1542]: time="2025-01-29T12:04:00.494798284Z" level=info msg="TearDown network for sandbox \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\" successfully" Jan 29 12:04:00.504655 kubelet[1893]: E0129 12:04:00.504622 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:04:00.613044 containerd[1542]: time="2025-01-29T12:04:00.612875558Z" level=info msg="shim disconnected" id=9a6aa17320ca76e862fd3ad7d9d08ed7e7981839f460616f43a4dd4c7c7490db namespace=k8s.io Jan 29 12:04:00.613044 containerd[1542]: time="2025-01-29T12:04:00.612931343Z" level=warning msg="cleaning up after shim disconnected" id=9a6aa17320ca76e862fd3ad7d9d08ed7e7981839f460616f43a4dd4c7c7490db namespace=k8s.io Jan 29 12:04:00.613044 containerd[1542]: time="2025-01-29T12:04:00.612949757Z" level=info msg="cleaning up dead shim" 
namespace=k8s.io Jan 29 12:04:00.667773 containerd[1542]: time="2025-01-29T12:04:00.667717388Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:04:00.667924 containerd[1542]: time="2025-01-29T12:04:00.667789314Z" level=info msg="RemovePodSandbox \"b0761be58103ddd43f910c83349fe2bc1da6db0f8499519f24153964a4f85ef9\" returns successfully" Jan 29 12:04:01.152994 kubelet[1893]: E0129 12:04:01.152952 1893 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 12:04:01.193731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4053432275.mount: Deactivated successfully. Jan 29 12:04:01.272475 kubelet[1893]: E0129 12:04:01.271995 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:04:01.274259 containerd[1542]: time="2025-01-29T12:04:01.274204688Z" level=info msg="CreateContainer within sandbox \"8ec75a24d812037527d1d017947d777deb1349c4273a13aaf3e86776c78f2ec4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 12:04:01.291625 containerd[1542]: time="2025-01-29T12:04:01.291584500Z" level=info msg="CreateContainer within sandbox \"8ec75a24d812037527d1d017947d777deb1349c4273a13aaf3e86776c78f2ec4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"47a376e648c2f6a92ec7b49c83a107afd3e3670beba9eaa7397aea7c2038ffe1\"" Jan 29 12:04:01.292128 containerd[1542]: time="2025-01-29T12:04:01.292079711Z" level=info msg="StartContainer for \"47a376e648c2f6a92ec7b49c83a107afd3e3670beba9eaa7397aea7c2038ffe1\"" Jan 29 12:04:01.361374 containerd[1542]: time="2025-01-29T12:04:01.361324835Z" level=info msg="StartContainer for \"47a376e648c2f6a92ec7b49c83a107afd3e3670beba9eaa7397aea7c2038ffe1\" returns successfully" Jan 29 12:04:01.469439 containerd[1542]: time="2025-01-29T12:04:01.469009356Z" level=info msg="shim disconnected" id=47a376e648c2f6a92ec7b49c83a107afd3e3670beba9eaa7397aea7c2038ffe1 namespace=k8s.io Jan 29 12:04:01.469439 containerd[1542]: time="2025-01-29T12:04:01.469335800Z" level=warning msg="cleaning up after shim disconnected" id=47a376e648c2f6a92ec7b49c83a107afd3e3670beba9eaa7397aea7c2038ffe1 namespace=k8s.io Jan 29 12:04:01.469439 containerd[1542]: time="2025-01-29T12:04:01.469345077Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:04:01.505332 kubelet[1893]: E0129 12:04:01.505270 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:04:01.564689 containerd[1542]: time="2025-01-29T12:04:01.564636406Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:01.565389 containerd[1542]: time="2025-01-29T12:04:01.565315391Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 12:04:01.566465 containerd[1542]: time="2025-01-29T12:04:01.566433100Z" level=info msg="ImageCreate event 
name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:01.567877 containerd[1542]: time="2025-01-29T12:04:01.567843681Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.875455265s" Jan 29 12:04:01.567913 containerd[1542]: time="2025-01-29T12:04:01.567874379Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 12:04:01.569943 containerd[1542]: time="2025-01-29T12:04:01.569892760Z" level=info msg="CreateContainer within sandbox \"bd5fed667b0478b6dc3fcd825ff0896dc541adb84874c2650d03eb1918009f81\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 12:04:01.582435 containerd[1542]: time="2025-01-29T12:04:01.582390981Z" level=info msg="CreateContainer within sandbox \"bd5fed667b0478b6dc3fcd825ff0896dc541adb84874c2650d03eb1918009f81\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"92c258b217149cf3e393fadb46752e7a6fb083540257df21567b118154d7b231\"" Jan 29 12:04:01.582856 containerd[1542]: time="2025-01-29T12:04:01.582827081Z" level=info msg="StartContainer for \"92c258b217149cf3e393fadb46752e7a6fb083540257df21567b118154d7b231\"" Jan 29 12:04:01.631549 containerd[1542]: time="2025-01-29T12:04:01.631509793Z" level=info msg="StartContainer for \"92c258b217149cf3e393fadb46752e7a6fb083540257df21567b118154d7b231\" returns successfully" Jan 29 12:04:02.176830 kubelet[1893]: I0129 12:04:02.176776 1893 setters.go:580] "Node became not ready" node="10.0.0.142" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T12:04:02Z","lastTransitionTime":"2025-01-29T12:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 12:04:02.275019 kubelet[1893]: E0129 12:04:02.274979 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:04:02.276716 kubelet[1893]: E0129 12:04:02.276683 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:04:02.278419 containerd[1542]: time="2025-01-29T12:04:02.278364176Z" level=info msg="CreateContainer within sandbox \"8ec75a24d812037527d1d017947d777deb1349c4273a13aaf3e86776c78f2ec4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 12:04:02.305327 kubelet[1893]: I0129 12:04:02.305273 1893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-xhx6l" podStartSLOduration=1.428591847 podStartE2EDuration="3.305256925s" podCreationTimestamp="2025-01-29 12:03:59 +0000 UTC" firstStartedPulling="2025-01-29 12:03:59.692012951 +0000 UTC m=+59.421660359" lastFinishedPulling="2025-01-29 
12:04:01.568678029 +0000 UTC m=+61.298325437" observedRunningTime="2025-01-29 12:04:02.286904719 +0000 UTC m=+62.016561004" watchObservedRunningTime="2025-01-29 12:04:02.305256925 +0000 UTC m=+62.034904333" Jan 29 12:04:02.315984 containerd[1542]: time="2025-01-29T12:04:02.315942147Z" level=info msg="CreateContainer within sandbox \"8ec75a24d812037527d1d017947d777deb1349c4273a13aaf3e86776c78f2ec4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f4caa0c7a6a9c464b582ddb9f72eb95e616226cba583ad03f7ee4b459fa79b9f\"" Jan 29 12:04:02.316428 containerd[1542]: time="2025-01-29T12:04:02.316385209Z" level=info msg="StartContainer for \"f4caa0c7a6a9c464b582ddb9f72eb95e616226cba583ad03f7ee4b459fa79b9f\"" Jan 29 12:04:02.505900 kubelet[1893]: E0129 12:04:02.505774 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:04:02.561844 containerd[1542]: time="2025-01-29T12:04:02.561777502Z" level=info msg="StartContainer for \"f4caa0c7a6a9c464b582ddb9f72eb95e616226cba583ad03f7ee4b459fa79b9f\" returns successfully" Jan 29 12:04:02.577752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4caa0c7a6a9c464b582ddb9f72eb95e616226cba583ad03f7ee4b459fa79b9f-rootfs.mount: Deactivated successfully. Jan 29 12:04:02.588069 containerd[1542]: time="2025-01-29T12:04:02.588004078Z" level=info msg="shim disconnected" id=f4caa0c7a6a9c464b582ddb9f72eb95e616226cba583ad03f7ee4b459fa79b9f namespace=k8s.io Jan 29 12:04:02.588069 containerd[1542]: time="2025-01-29T12:04:02.588062398Z" level=warning msg="cleaning up after shim disconnected" id=f4caa0c7a6a9c464b582ddb9f72eb95e616226cba583ad03f7ee4b459fa79b9f namespace=k8s.io Jan 29 12:04:02.588069 containerd[1542]: time="2025-01-29T12:04:02.588071926Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:04:03.281608 kubelet[1893]: E0129 12:04:03.281324 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:04:03.281608 kubelet[1893]: E0129 12:04:03.281543 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:04:03.286010 containerd[1542]: time="2025-01-29T12:04:03.285970230Z" level=info msg="CreateContainer within sandbox \"8ec75a24d812037527d1d017947d777deb1349c4273a13aaf3e86776c78f2ec4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 12:04:03.472765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1219381032.mount: Deactivated successfully. 
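The pod_startup_latency_tracker entry above for cilium-operator-599987898-xhx6l reports two figures whose relationship can be checked directly against the timestamps it contains: podStartE2EDuration is the interval from podCreationTimestamp to watchObservedRunningTime, and podStartSLOduration appears to be that same interval minus the time spent pulling the operator image (lastFinishedPulling minus firstStartedPulling). A small Go check of that arithmetic using the values from this log (a reading of the entry, not kubelet code):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry for
	// cilium-operator-599987898-xhx6l (monotonic "m=+..." suffixes dropped).
	created := time.Date(2025, time.January, 29, 12, 3, 59, 0, time.UTC)           // podCreationTimestamp
	firstPull := time.Date(2025, time.January, 29, 12, 3, 59, 692012951, time.UTC) // firstStartedPulling
	lastPull := time.Date(2025, time.January, 29, 12, 4, 1, 568678029, time.UTC)   // lastFinishedPulling
	running := time.Date(2025, time.January, 29, 12, 4, 2, 305256925, time.UTC)    // watchObservedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration = e2e minus image pull time

	fmt.Println(e2e) // 3.305256925s, as logged
	fmt.Println(slo) // 1.428591847s, as logged
}

Both printed durations match the logged values, which is consistent with the SLO figure excluding the 1.875s image pull reported a few entries earlier.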
Jan 29 12:04:03.506522 kubelet[1893]: E0129 12:04:03.506476 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:04:03.684632 containerd[1542]: time="2025-01-29T12:04:03.684550420Z" level=info msg="CreateContainer within sandbox \"8ec75a24d812037527d1d017947d777deb1349c4273a13aaf3e86776c78f2ec4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9a30f1b4b41e9a2c22736844d9efb80cb467718debef63d1460098df465b85e1\"" Jan 29 12:04:03.685227 containerd[1542]: time="2025-01-29T12:04:03.685118206Z" level=info msg="StartContainer for \"9a30f1b4b41e9a2c22736844d9efb80cb467718debef63d1460098df465b85e1\"" Jan 29 12:04:03.873828 containerd[1542]: time="2025-01-29T12:04:03.873773466Z" level=info msg="StartContainer for \"9a30f1b4b41e9a2c22736844d9efb80cb467718debef63d1460098df465b85e1\" returns successfully" Jan 29 12:04:04.167431 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 29 12:04:04.285803 kubelet[1893]: E0129 12:04:04.285766 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:04:04.506833 kubelet[1893]: E0129 12:04:04.506715 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:04:05.507518 kubelet[1893]: E0129 12:04:05.507467 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:04:05.611890 kubelet[1893]: E0129 12:04:05.611863 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:04:06.508424 kubelet[1893]: E0129 12:04:06.508370 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:04:07.455300 systemd-networkd[1240]: lxc_health: Link UP Jan 29 12:04:07.466680 systemd-networkd[1240]: lxc_health: Gained carrier Jan 29 12:04:07.510438 kubelet[1893]: E0129 12:04:07.508480 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:04:07.612670 kubelet[1893]: E0129 12:04:07.612633 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:04:07.645107 kubelet[1893]: I0129 12:04:07.644839 1893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8h9fv" podStartSLOduration=8.644821454 podStartE2EDuration="8.644821454s" podCreationTimestamp="2025-01-29 12:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:04.322943061 +0000 UTC m=+64.052590469" watchObservedRunningTime="2025-01-29 12:04:07.644821454 +0000 UTC m=+67.374468862" Jan 29 12:04:08.292294 kubelet[1893]: E0129 12:04:08.292255 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:04:08.509489 kubelet[1893]: E0129 12:04:08.509438 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
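The recurring dns.go:153 warnings in this stretch of the journal mean the node's resolv.conf lists more than three nameservers; the kubelet keeps only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) because the glibc resolver ignores anything beyond that limit. A short Go sketch of that trimming logic (illustrative, not the kubelet's implementation; /etc/resolv.conf is assumed as the conventional location):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// The glibc resolver only uses the first three "nameserver" lines, so the
// kubelet trims a pod's resolv.conf to three entries and emits the
// "Nameserver limits exceeded" warning seen above. Sketch only.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("dropping %d nameserver(s), keeping %v\n",
			len(servers)-maxNameservers, servers[:maxNameservers])
	}
}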
Jan 29 12:04:09.221572 systemd-networkd[1240]: lxc_health: Gained IPv6LL Jan 29 12:04:09.293699 kubelet[1893]: E0129 12:04:09.293666 1893 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:04:09.509857 kubelet[1893]: E0129 12:04:09.509743 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:04:10.510126 kubelet[1893]: E0129 12:04:10.510081 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:04:11.510833 kubelet[1893]: E0129 12:04:11.510775 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:04:12.248117 systemd[1]: run-containerd-runc-k8s.io-9a30f1b4b41e9a2c22736844d9efb80cb467718debef63d1460098df465b85e1-runc.EOekM0.mount: Deactivated successfully. Jan 29 12:04:12.511649 kubelet[1893]: E0129 12:04:12.511526 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:04:13.512547 kubelet[1893]: E0129 12:04:13.512505 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:04:14.512670 kubelet[1893]: E0129 12:04:14.512606 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:04:15.513286 kubelet[1893]: E0129 12:04:15.513235 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:04:16.514299 kubelet[1893]: E0129 12:04:16.514178 1893 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
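The other message that repeats throughout this section, file_linux.go:61 "Unable to read config path" for /etc/kubernetes/manifests, is the kubelet's static-pod file source noticing that its configured staticPodPath does not exist on this node; it logs the condition, ignores it, and polls again, which is why the line recurs roughly once a second here. A minimal Go sketch of that check (the path is taken from the log; the function is illustrative, not kubelet code):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// checkStaticPodPath mirrors the behaviour visible in the log: a missing
// staticPodPath is reported and ignored rather than treated as fatal.
func checkStaticPodPath(path string) {
	entries, err := os.ReadDir(path)
	switch {
	case errors.Is(err, fs.ErrNotExist):
		fmt.Printf("Unable to read config path %q: path does not exist, ignoring\n", path)
	case err != nil:
		fmt.Printf("Unable to read config path %q: %v\n", path, err)
	default:
		fmt.Printf("found %d static pod manifest(s) in %q\n", len(entries), path)
	}
}

func main() {
	checkStaticPodPath("/etc/kubernetes/manifests")
}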